I recently had the wonderful opportunity to participate on a panel about OER at the Knewton Education Symposium. Earlier this week, Knewton CEO Jose Ferreira blogged about ‘OER and the Future of Publishing’ for EdSurge, briefly mentioning the panel. I was surprised by his post, which goes out of its way to reassure publishers that OER will not break the textbook industry.

Much of the article is spent criticizing the low production values, lack of instructional design, and missing support that often characterize OER. The article argues that there is a potential role for publishers to play in each of these service categories, leveraging OER to lower their costs and improve their products. But it’s been over 15 years since the first openly licensed educational materials were published, and major publishers have yet to publish a single textbook based on pre-existing OER. Why?

Exclusivity, Publishing, and OER

The primary reason is that publishers are – quite rationally – committed to the business models that made them incredibly successful businesses. And the core of that model is exclusivity – the contractual right to be the only entity that can offer the print or digital manifestation of Professor Y’s expertise on subject X. Exclusivity is the bedrock of the publishing industry, and no publisher will ever meaningfully invest in building up the reputation and brand of a body of work which is openly licensed. Publisher B would simply sit on the sidelines while Publisher A exhausts its marketing budget persuading the world that its version of Professor Y’s open materials is the best in the field. Once Professor Y’s brand is firmly associated with high quality, Publisher B will release its own version of Professor Y’s open materials, free-riding on Publisher A’s marketing spend. Publisher A’s marketing efforts actually end up promoting Publisher B’s competing product in a very real way. No, publishers will never put OER at the core of their offerings, because open licensing – guaranteed nonexclusivity – is the antithesis of their entire industrial model. Some playing around in the supplementals market is the closest major publishers will ever come to engaging with OER.

New Models Enabled by OER

However, we are seeing the emergence of a new kind of organization, which is neither invested in preserving existing business models nor burdened with the huge content creation, distribution, and sales infrastructure that a large commercial publisher must support. (This sizable infrastructure, which once represented an insurmountable barrier to entry, is quickly becoming a millstone around the neck of big publishers facing the threat of OER.) The new breed of organization is only too happy to take the role of IBM or Red Hat and provide all the services necessary to make OER a viable alternative to commercial offerings. I had to chuckle a little reading the advice to publishers Jose provides in his post, because that list of services could almost have been copied and pasted from my company’s website (Lumen Learning): iterative cycles of instructional design informed by data, integration services, faculty support, etc. I agree wholeheartedly that these are the kinds of services that must be offered to make OER a true competitor to commercial textbooks in the market – but I disagree with the idea that publishers will ever be willing to offer them. That realization is part of what led me to quit a tenured faculty job in a prestigious graduate program to co-found Lumen Learning.

All that said, the emergence of these organizations won’t spell the end of large textbook publishers as we know them. Instead, that distinction will go to the simplest possible metric by which we could measure the impact of the educational materials US students spend billions of dollars per year on: learning outcomes per dollar.

Learning Outcomes per Dollar

No educator would ever consciously make a choice that harmed student learning in order to save money. But what if you could save students significant amounts of money without doing them any academic harm? Going further, what if you could simultaneously save them significant money and improve their learning outcomes? Research on OER is showing, time and again, that this latter scenario is entirely possible. One brief example will demonstrate the point.

A recent article published in Educause Review describes Mercy College’s change from a popular math textbook and online practice system bundle provided by a major publisher (~$180 per student) to OER and an open source online practice system. Here are some of the results they reported after a successful pilot semester using OER in 6 sections of basic math:

  • At pilot’s end, Mercy’s Mathematics Department chair announced that, starting in fall 2012, all 27 sections (695 students) in basic mathematics would use [OER].
  • Between spring 2011 [no sections using OER] and fall 2012 [all sections using OER], the math pass rate increased from 48.40 percent to 68.90 percent.
  • Algebra courses dropped their previously used licenses and costly math textbooks and resources, saving students a total of $125,000 the first year.

By switching all sections of basic math to OER, Mercy College saved its students $125,000 in one year and raised the pass rate from 48 to 69 percent – a 44% relative improvement.

If you read the article carefully, you’ll see that Mercy actually received a fair amount of support in its implementation of OER, which was funded through a grant. So let’s be honest and put the full cost-related details on the table. Mercy (like many other schools) is still receiving the support it previously received for free through its participation in the Kaleidoscope Open Course Initiative (KOCI). Lumen Learning, whose personnel led the KOCI, now provides those same services to Mercy and other schools for $5 per enrollment.

So let’s do the learning outcomes per dollar math:

  • Popular commercial offering: 48.4% students passing / $180 textbook and online system cost per student = 0.27% students passing per required textbook dollar
  • OER offering: 68.9% students passing / $5 textbook and online system cost per student = 13.78% students passing per required textbook dollar

For the number I call the “OER Impact Factor,” we simply divide these two ratios with OER on top:

  • 13.78% students passing per required textbook dollar / 0.27% students passing per required textbook dollar = 51.03

This basic computation shows that, in Mercy’s basic math example, using OER led to an over 50x increase (i.e., a 5000% improvement) in percentage passing per dollar. No matter how you look at it, that’s a radical improvement.

If similar performance data were available for two construction companies, and a state procurement officer awarded a contract to the vendor that produces demonstrably worse results while costing significantly more, that person would lose his job, if not worse. (As an aside, I’m not aware of any source where a taxpayer can find out what percentage of federal financial aid (for higher ed) or their state public education budget (for K-12) is spent on textbooks, making it impossible to even begin asking these kinds of questions at any scale.) While faculty and departments aren’t subject to exactly the same accountability pressures as state procurement officers, how long can they continue choosing commercial textbook options over OER as this body of research grows?

#winning

Jose ends his post by saying “Publishers who can’t beat OER deserve to go out of business,” and he’s absolutely right. But in this context, “beat” means something very different for OER than it does for publishers. For OER, “beat” means being selected by faculty or departments as the only required textbook listed on the syllabus (I call this a “displacing adoption”). Without a displacing adoption – that is, if OER are adopted in addition to required publisher materials – students may experience an improvement in learning outcomes but will definitely not see a decrease in the price of going to college. Hence, OER “beat” publishers only in the case of a displacing adoption. For publishers, the bar is much lower – to “beat” OER, publishers simply need to remain on the syllabus under the “required” heading.

How are OER supposed to clear this higher bar, particularly given the head start publishers have? OER have only recently started to catch up with publishers in many of the areas where publishers have enjoyed historical advantages, like packaging and distribution (cf. the amazing work being done by OpenStax, BCCampus OpenEd, Lumen Learning, and others). But OER have been beating publishers on price and learning outcomes for several years now, and proponents of OER would be wise to keep the conversation laser-focused on these two selection criteria. In a fortunate coincidence for us, I believe these are the two criteria that matter most.

OER offerings are always going to win on price – no publisher is ever going to offer their content, hosting platform, analytics, and faculty-facing services in the same zip code as $5 per student. (And when we see the emergence of completely adaptive offerings based on OER – which we will – even if they are more expensive than $5 per student they will still be significantly less expensive than publishers’ adaptive offerings.) Even if OER only manage to produce the same learning results as commercial textbooks (a “no significant difference” research result), they still win on price. “How would you feel about getting the same outcomes for 95% off?” All OER have to do is not produce worse learning results than commercial offerings.

So the best hope for publishers is in creating offerings that genuinely promote significantly better learning outcomes. (I can’t describe how happy I am to have typed that last sentence.) The best opportunity for publishers to soundly defeat OER is through offerings that result in learning outcomes so superior to OER that their increased price is justified. Would you switch from a $5 offering that resulted in a 65% passing rate to a $100 offering that resulted in a 67% passing rate? Would you switch to a $225 offering that resulted in a 70% passing rate? There is obviously some performance threshold at which a rational actor would choose to pay 20 or 40 times more, but it’s not immediately apparent to me where it is.
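
To make that trade-off concrete, here is a quick sketch (my own illustration) scoring the hypothetical offerings from the questions above on the same outcomes-per-dollar metric used earlier:

```python
# Hypothetical offerings from the thought experiment above, scored with the
# learning-outcomes-per-dollar ratio. Prices and pass rates are illustrative.

offerings = {
    "$5 OER, 65% passing": (65.0, 5.0),
    "$100 commercial, 67% passing": (67.0, 100.0),
    "$225 commercial, 70% passing": (70.0, 225.0),
}

ratios = {name: pass_rate / cost for name, (pass_rate, cost) in offerings.items()}

for name, ratio in ratios.items():
    print(f"{name}: {ratio:.2f}% passing per dollar")
```

On the ratio alone, the $5 offering wins every comparison here (13.00 vs. 0.67 vs. 0.31), which is why the threshold question turns on how much a buyer values the absolute gain in pass rate rather than on the metric itself.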

However, if OER can beat publishers on both price and learning outcomes, as we’re seeing them do, then OER deserve to be selected by faculty and departments over traditional commercial offerings in displacing adoptions.

I was the member of the panel Jose quoted as saying that ‘80% of all general education courses taught in the US will transition to OER in the next 5 years,’ and I honestly believe that’s true. The combined forces of the innovator’s dilemma, the emergence of new, Red Hat-like organizations supporting the ecosystem around OER, the learning outcomes per dollar metric, and the growing national frustration over the cost of higher education all seem to point clearly in this direction.


Aaron Wolf is contributing to a nice thread in the comments under my description of the recently revised definition of the “open” in “open content”. I’ve revised my ShareAlike example to distribute blame evenly across Wikipedia and MIT OCW based on his comments. You can see the current version of the definition at http://opencontent.org/definition/.

I want to address an accusation of Aaron’s here. He mentions other “definitions of Open that bother working to be precise and not vague,” in which category he includes the definitions from the Open Knowledge Foundation, the Free Cultural Works moderators, etc., in apparent contrast to my definition. I have a number of problems with all these definitions. I’ll address the OKF definition here just to provide a specific example.

Being ‘Precise and Not Vague’

First, the OKF definition misses the critical distinction between revising and remixing, lumping these both into the category of “modifications and derivative works.” The distinction between revising and remixing is critical because, among other things, one invokes the specter of license incompatibility while the other does not. People need to understand, plan, and manage against this important difference when they work with open content. You might argue that the difference is implied in the OKF definition, but that’s not “precise.”

Second, the OKF definition uses the language of “access” and not the language of “ownership.” In a world where things are moving increasingly toward streaming services where people can always access but never own anything, this is potentially confusing. Again, you can argue that ownership is implied in the definition, but that’s not “precise.”

Third, the OKF definition qualifies works as open based on their “manner of distribution.” After opening with this statement, 10 of the 11 clarifying bullet points begin with either the phrase “The license” or “The rights.” (The one bullet that does not begin this way could be rewritten in a clearer manner if it did.) Obviously, content qualifies as open based on the rights granted to you in its license and not based on its manner of distribution. Again, you can argue that, given the repeating chorus of rights and license language, this is implied in the definition, but that’s not “precise.”

The 5Rs in the new definition deal with each of these issues much more precisely.

Inheriting a Bright Line from the DFSG

I think the primary problem with many of these definitions is that they take the Debian Free Software Guidelines and try to coerce a document written for software to apply to content. This always results in a poor fit that feels forced. Content is different from software in meaningful ways and deserves its own treatment created specifically around its special affordances. (The OKF definition is particularly forced, as it takes a document written for software and tries, in a single derivative work, to coerce it into applying to both content and data, which are also meaningfully different from each other.)

I imagine that when Aaron says ‘definitions like the OKF definition are more “precise”’, what he really means to say is that these definitions draw a bright line with regard to which restrictions licensors can place on uses of content (Attribution and ShareAlike) and which ones they can’t (Noncommercial) if they want to be able to call that content “open.” I specifically refuse to draw a line of this kind in defining the open in open content.

There is a continuum of restrictions in the many licenses used for content (BY, BY NC, BY SA, BY NC SA, etc.), and I don’t find drawing an arbitrary line somewhere along that continuum to be a useful exercise. On the contrary, I find it a counterproductive exercise. Drawing this line allows people to believe that choosing a license just barely on the open side of the line (e.g., BY SA) is “good enough” and that there’s no need to consider being more open. In fact, when the continuum is collapsed into two discrete categories – open or not – the phrase “more open” doesn’t even have a meaning any longer. According to the bright line definitions, BY SA is just as open as BY – they both qualify as “open.”

By destroying the continuum of openness, the “bright line of restrictions” approach robs people of the opportunity to ask themselves questions like “should I be more open?” or “how can I be more open?” We should be doing everything we can to encourage people to ponder on those questions. We should help everyone be as open as possible, not simply “open enough.” That’s one of the main reasons why the “open” in “open content” is defined the way it is.


Earlier this week I read the Wikipedia entry on open content. Suffice it to say I was somewhat disappointed by the way the editors of the page interpreted my writings defining the “open” in open content. I think their interpretation was plausible and legitimate, but it is certainly not the message I intended people to take away after reading the definition. So, the fault for my unhappiness is mine for not having been clearer in my writing.

Consequently, I have refined and clarified the definition, which lives at http://opencontent.org/definition/, including a new heading for the section on license requirements and restrictions, and a new section on technical decisions and ALMS analysis. I present the revised definition and commentary below for quick reference. I’d be very interested in your reactions and feedback.

Hopefully the Wikipedians will update the entry soon…

Defining the “Open” in Open Content

The term “open content” describes any copyrightable work (traditionally excluding software, which is described by other terms like “open source”) that is licensed in a manner that provides users with free and perpetual permission to engage in the 5R activities:

  1. Retain – the right to make, own, and control copies of the content (e.g., download, duplicate, store, and manage)
  2. Reuse – the right to use the content in a wide range of ways (e.g., in a class, in a study group, on a website, in a video)
  3. Revise – the right to adapt, adjust, modify, or alter the content itself (e.g., translate the content into another language)
  4. Remix – the right to combine the original or revised content with other open content to create something new (e.g., incorporate the content into a mashup)
  5. Redistribute – the right to share copies of the original content, your revisions, or your remixes with others (e.g., give a copy of the content to a friend)

Legal Requirements and Restrictions
Make Open Content Less Open

While a free and perpetual grant of the 5R permissions by means of an “open license” qualifies a creative work to be described as open content, many open licenses place requirements (e.g., mandating that derivative works adopt a certain license) and restrictions (e.g., prohibiting “commercial” use) on users as a condition of the grant of the 5R permissions. The inclusion of requirements and restrictions in open licenses makes open content less open than it would be without them.

There is disagreement in the community about which requirements and restrictions should never, sometimes, or always be included in open licenses. Creative Commons, the most important provider of open licenses for content, offers licenses that prohibit commercial use. While some in the community believe there are important use cases where the noncommercial restriction is desirable, many in the community eschew the restriction. Wikipedia, one of the most important collections of open content, requires all derivative works to adopt a specific license. While they clearly believe this additional requirement promotes their particular use case, it makes Wikipedia content incompatible with content from other important open content collections, such as MIT OpenCourseWare.

Generally speaking, while the choice by open content publishers to use licenses that include requirements and restrictions can optimize their ability to accomplish their own local goals, the choice typically harms the global goals of the broader open content community.

Poor Technical Choices
Make Open Content Less Open

While open licenses provide users with legal permission to engage in the 5R activities, many open content publishers make technical choices that interfere with a user’s ability to engage in those same activities. The ALMS Framework provides a way of thinking about those technical choices and understanding the degree to which they enable or impede a user’s ability to engage in the 5R activities permitted by open licenses. Specifically, the ALMS Framework encourages us to ask questions in four categories:

  1. Access to Editing Tools: Is the open content published in a format that can only be revised or remixed using tools that are extremely expensive (e.g., 3DS MAX)? Is the open content published in an exotic format that can only be revised or remixed using tools that run on an obscure or discontinued platform (e.g., OS/2)? Is the open content published in a format that can be revised or remixed using tools that are freely available and run on all major platforms (e.g., OpenOffice)?
  2. Level of Expertise Required: Is the open content published in a format that requires a significant amount of technical expertise to revise or remix (e.g., Blender)? Is the open content published in a format that requires a minimum level of technical expertise to revise or remix (e.g., Word)?
  3. Meaningfully Editable: Is the open content published in a manner that makes its content essentially impossible to revise or remix (e.g., a scanned image of a handwritten document)? Is the open content published in a manner making its content easy to revise or remix (e.g., a text file)?
  4. Self-Sourced: Is the format preferred for consuming the open content the same format preferred for revising or remixing the open content (e.g., HTML)? Is the format preferred for consuming the open content different from the format preferred for revising or remixing the open content (e.g., Flash FLA vs. SWF)?

Using the ALMS Framework as a guide, open content publishers can make technical choices that enable the greatest number of people possible to engage in the 5R activities. This is not an argument for “dumbing down” all open content to plain text. Rather it is an invitation to open content publishers to be thoughtful in the technical choices they make – whether they are publishing text, images, audio, video, simulations, or other media.
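
As a concrete (and entirely unofficial) sketch of what such thoughtfulness might look like in practice, the four categories can be encoded as a simple checklist. The framework is described above; this encoding and the example answers are my own illustration, not an official scoring scheme.

```python
# A minimal ALMS checklist for a single piece of open content.
# Each category is marked True when the technical choice favors
# the 5R activities, False when it impedes them.

alms_review = {
    "access_to_editing_tools": True,   # editable with free, cross-platform tools
    "level_of_expertise": True,        # no specialist skills needed to revise
    "meaningfully_editable": True,     # real source files, not a scanned image
    "self_sourced": False,             # consuming format differs from editing format
}

criteria_met = sum(alms_review.values())
print(f"{criteria_met} of {len(alms_review)} ALMS categories favor the 5R activities")
```

A review like this does not yield a pass/fail verdict; it simply surfaces which technical choices are narrowing the pool of people who can revise and remix the content.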
