OER: Some Questions and Answers

Earlier this week I read an op-ed – sponsored by Pearson – titled “If OER is the answer, what is the question?” The article poses three questions and answers them. Below I share some thoughts prompted by the article. (The questions from the article are presented in bold; unattributed blockquotes are from the original article.)

How do we deliver better learning experiences to more students?

There are fantastic learning resources out there of all breeds bringing different types of value to the learning process. OER often shine in their variety and ability to deepen resources for niche topics. Where proprietary courseware (textbooks, etextbooks, or online courseware) stand apart is in pedagogical organization and the unique value of authorship. While it’s possible to build a complete course from OER, the finished product often lacks the scaffolding found in courseware authored by single author/editorial/product teams. That scaffolding connects concepts and practice together, guiding students through the content in a way that maximizes learning.

I’m glad that the author goes straight to the issue of student learning. When all is said and done, the degree to which resources like commercial textbooks and OER support student learning is the only thing that matters. (I will use the language of effectiveness rather than efficacy below, for very important reasons I discussed previously.)

Absent any effectiveness data, for decades faculty who were evaluating educational resources had no choice but to settle for characteristics of resources that reasonable people might believe correlate positively with effectiveness. These proxies for effectiveness included famous authors, name-brand publishers, large product teams with diverse skill sets, Steven Spielberg-like production quality in graphic design, layout, and imagery, and highly formalized editorial and review processes. Unfortunately, sometime during these passing decades faculty began to believe that only resources with these proxy characteristics could be effective in supporting learning. They began to assume that other development models must necessarily result in materials that are less effective. I don’t think it would be controversial to say that this “content worldview” was encouraged by publishers.

A growing number of peer-reviewed studies and other research reports are demonstrating that when faculty who previously used commercial products as their core instructional materials replace them with OER, student learning either stays the same or increases. Hilton’s review of this research suggests that this “same or better” outcome for OER users holds about 93% of the time.

This result – that freely available resources can support student learning as well as very expensive resources – runs counter to people’s intuition that “you get what you pay for.” As we see in other areas (e.g., climate change), when the truth differs significantly from people’s beliefs, there can be a steep communications hill to climb. This has certainly been the case for OER, and is the primary reason why it is so critically important that more empirical research on the relative effectiveness of OER be conducted and published in peer-reviewed academic journals.

While we need a larger, more robust literature addressing the question of the relative effectiveness of commercial resources and OER, there is another sense in which this research is utterly meaningless. If I purchase the rights to an out-of-print textbook from Pearson and relicense it CC BY, is it now more or less effective than it was the day before? This is a ridiculous question.

The idea that the language on the copyright page could significantly influence student learning is completely irrational, yet such are the questions we must answer over and over again. As people continue to raise questions in the face of mounting evidence, I’m reminded of Nobel Prize-winning economist Daniel Kahneman, who wrote:

The mystery is how a conception that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: Once you have accepted a theory, it is extraordinarily difficult to notice its flaws. As the psychologist Daniel Gilbert has observed, disbelieving is hard work.

There are no results from the instructional design, learning science, or cognitive science literature demonstrating that the language on the copyright page is a critical factor in promoting student learning. There is a growing body of research demonstrating that OER can be just as effective – or more effective – than commercial materials. It should be obvious to anyone that the features of instructional materials that effectively support learning (e.g., frequent formative assessment opportunities) can appear in educational resources with any copyright license.

The initial empirical results look good for OER, and we should hope they continue to hold. If alternative development models can continue to produce materials that support learning as effectively as the traditional publisher models, there is undoubtedly a positive future for OER. However, if we have to spend $1M per open textbook to achieve effectiveness results on par with commercial publishers, the long-term sustainability of OER is seriously in question.

Can an instructional design-minded instructor provide that missing connective tissue? Absolutely. But how many have the time and motivation to do so? The amount of work required to do it well is one of the major reasons those who can and want to do it also want to be fairly compensated for it. Best-selling authors consistently deliver high-quality, engaging learning experiences that their audience recognizes and seeks out. They do this through complete pedagogical systems that support the educator and learner better than others do, and that’s tough to replicate.

There are two closely related issues here that we can address together using the Drake Equation as a scaffold for our thinking.

The Drake equation estimates the number of civilizations in our galaxy we might be able to communicate with by applying a series of filters, as follows:

N = R* x fp x ne x fl x fi x fc x L; where:

N = the number of civilizations in our galaxy with which communication might be possible (i.e. which are on our current past light cone); and

R* = the average rate of star formation in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
fl = the fraction of planets that could support life that actually develop life at some point
fi = the fraction of planets with life that actually go on to develop intelligent life (civilizations)
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time for which such civilizations release detectable signals into space (via Wikipedia)

What does the Drake Equation have to do with OER production and enhancement? It suggests that we might look at issues of faculty time and incentives raised in the op-ed as Drake-like filters:

N = Ce * Pe * T * M; where:

N = number of people who create or improve effective OER; and

Ce = the number of people with sufficient Content Expertise to create or improve effective OER
Pe = the fraction of those people with sufficient Pedagogical Expertise to create or improve effective OER
T = the fraction of those people with Time available to work on OER
M = the fraction of those people with sufficient non-financial Motivation to spend time working on OER

What might this look like in practice? Here is a rough, back-of-the-envelope example for Introduction to Biology:

Ce = 2 teaching faculty per post-secondary institution x 4000 post-secondary institutions = 8000 people
Pe = .5
T = .1
M = .1

N = 8000 * .5 * .1 * .1 = 40 people
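This back-of-the-envelope arithmetic is easy to sketch as a small function — a minimal model, using the illustrative guesses from the text rather than real data:

```python
# Drake-style filter model: N = Ce * Pe * T * M
# Each factor filters the pool of potential OER creators further.
# All numbers are illustrative guesses, not empirical estimates.

def oer_creators(content_experts: int, pedagogy_frac: float,
                 time_frac: float, motivation_frac: float) -> float:
    """Estimate the number of people who create or improve effective OER."""
    return content_experts * pedagogy_frac * time_frac * motivation_frac

# Introduction to Biology example:
# 2 teaching faculty per institution x 4000 institutions = 8000 experts
n = oer_creators(8000, 0.5, 0.1, 0.1)
print(n)  # 40.0
```

Tweaking any single factor shows how sensitive N is to each filter — halving T, for example, halves the final estimate.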

I’m confident that both this specific model and these specific numbers are wrong, but I present them in the spirit of Box’s “all models are wrong, but some are useful.” As long as N > 0, extremely interesting things can happen. Even in the case of N = 1 – like the cases of Sal Khan or James Sousa – there can be an incredible impact on teaching and learning. Both Khan and Sousa created thousands of short instructional videos and published them under open licenses. How many more people need to do this work? Certainly not every instructor in the country. The nonrivalrous nature of digital resources (the technical ability to share copies of resources at almost no cost) combined with open licenses (the legal permission to share copies of resources at no cost) means that only a handful of people need to be actively involved in producing or making substantive improvements to OER in order for the public to have free and open access to resources whose effectiveness is on par with those created by commercial publishers.

Perhaps most importantly, the major efforts made by OER workhorses like Khan or Sousa catalyze additional, incremental work by others over time. As Benkler has explained, the smaller the contribution to be made, the more people there are who will have the time and inclination to contribute (cf. Shirky’s idea of cognitive surplus).

This creates the opportunity for asynchronous, uncoordinated, incremental, continuous improvement that harkens back to Eric Raymond’s notion that “Every good work of software starts by scratching a developer’s personal itch.” Individual instructors make no improvements, small improvements, or large improvements to existing OER based on their own needs and available resources. Some subset of the group that makes changes shares those changes back with the community. This kind of “snowball development” is a key characteristic of the most interesting and effective OER.

The future of the sustainable development of effective OER will be characterized by stigmergy. Stigmergy is the watchword for the next decade of OER.

Commercial publishers and those who think along similar lines will immediately point out problems with this alternative development model. The first of these problems relates to the notion of “fair compensation,” incentives, and sustainability raised in the op-ed. This argument harkens back to the classic view of why public goods are underproduced:

Public goods provide a very important example of market failure, in which market-like behavior of individual gain-seeking does not produce efficient results. The production of public goods results in positive externalities which are not remunerated. If private organizations do not reap all the benefits of a public good which they have produced, their incentives to produce it voluntarily might be insufficient…. If too many consumers decide to “free-ride”, private costs exceed private benefits and the incentive to provide the good or service through the market disappears. The market thus fails to provide a good or service for which there is a need. (via Wikipedia)

Like much of classical economics, this model of a world of perfectly rational people acting in their own economic self-interest is simply untrue. Now remember that a model can be wrong, but still be useful. However, in discussing OER this model is neither correct nor useful. It lacks an account of why people volunteer for or donate their time, money, and effort to a range of charitable and other causes, including the creation, improvement, and maintenance of open source software and open educational resources. Theoretically, it’s impossible for anything open to exist at any scale or quality – after all, there is no market incentive for its creation. Yet, somehow, the overwhelming majority of the internet still runs on Linux, Apache, Ruby on Rails, Node, WordPress, and other open source infrastructure. And increasingly, higher education courses are running on an open content infrastructure (i.e., OER).

Unfortunately, while commercial publishers do have a market incentive to invest in the creation and update of educational materials, those incentives are aligned primarily with selling the materials and only secondarily with supporting learning. Much – though certainly not all – of the investment and innovation coming out of commercial publishers over the last decade has been in things like ingenious DRM systems that cause natively nonrivalrous digital content to become artificially scarce so that it can be better monetized; the frenetic pace of creation and release of new editions of textbooks, where the main purpose of the new edition is to undercut the used book market in previous editions; and the creation of digital rental and other “streaming access” type of services where students only license temporary access to content and never own anything, in order to eliminate used markets altogether. Yes, publishers have a financial incentive to create educational materials, but it is not primarily aligned with student learning.

Publishers will also likely come back to the core argument that their production process guarantees “high quality and engaging experiences,” with the subtle implication that other processes either cannot or likely will not. First, this argument is tautological if you have convinced listeners that “high quality” means “the set of proxies for effectiveness that traditional production processes create.” Before there can be any progress made in the educational materials or broader educational technology markets, we have to shed vague notions of “high quality” and have a laser-like focus on effectiveness. When faculty begin basing their adoption decisions on effectiveness rather than publisher, author, or the likelihood that a textbook’s interior photos will win a Pulitzer Prize for Photography, the market will start moving in a direction that better supports learning.

Second, there is a subtle sleight of hand that frequently goes unnoticed in the quality argument (even if we assume that quality means effectiveness). The argument goes that student learning will likely suffer because of the presumed poor quality of OER. Now, there might be some truth to a similar argument when discussing students’ critical information literacy in their use of resources they discover on the open internet. But who chooses the core instructional resources students will use? Faculty do. If you believe you can honestly argue that faculty don’t have the basic content or pedagogical knowledge necessary to effectively select core instructional materials, then the materials selection process should be very low on your list of concerns for student learning.

How do we get the most current, updated content when we want it?

When it comes to revising and remixing content, OER hold some advantages over the traditional textbook revision cycle. The ability to customize for a specific region or update to reflect recent world events is very academically appealing and can yield more relevant, up-to-the minute content. But who ultimately owns the responsibility for updating and refreshing content? Is it up to each individual instructor? Would department heads be responsible for making the call? How would the quality of these updates be assured? In research intensive disciplines, who is responsible for completing ongoing literature reviews, distilling the best new research, and updating course materials? Who maintains working problem sets over time for disciplines requiring them? And what is the incentive to keep the cycle going? There are some intrepid instructors who would eagerly take this on, but when you consider the implications for content updates at scale, demand can far outstretch capacity. Without clear responsibilities, incentives, quality controls, and a repeatable process for managing content updates, there are substantial costs imposed upon the user of OER for its upkeep.

I believe I addressed all of the questions asked in this paragraph in my concise (!) responses above.

How can we drive down costs for students and for institutions?

This seems to be the most likely question that OER seek to answer. And underpinning this is the presumption that “open” means “free.” But as mentioned above, there are significant costs imposed on both the learner (e.g., pedagogical inconsistency) and the instructor (e.g., material curation and upkeep).

As I have explained at length before, most recently through an analysis of the usage of the word “open” in a cluster of interrelated contexts, the definition of open in contexts like “open educational resources” has two critical components – (1) free, plus (2) 5R permissions. There’s a lack of sophistication in the way most people talk about the “free” component of the definition of open, including the op-ed.

All educational resources – whether open or commercial – can be used to support learning only with some effort by the instructor. (We’ll ignore the obvious fact that learners must also exert effort to learn from resources, which is captured in the discussion of effectiveness vs efficacy linked above.) When a publisher releases a new edition of a book and stops selling the old one, forcing an instructor to change texts, or when an instructor chooses to switch from one author’s book to a second author’s book, or when an instructor chooses to switch from commercial materials to OER, there is a significant amount of effort required on the part of the instructor. You may have heard faculty complain about how many “preps” they have this semester. This effort can typically be converted into a cost by multiplying the time the instructor spends on the task by his or her hourly rate. (I suspect I would disagree with the op-ed author about the relative amount of effort necessary to be ready to teach effectively with commercial materials vs OER. No doubt we would both likely overestimate in favor of our preferred solution.) In this sense, no materials – whether commercial or open – can ever be used without incurring cost. Therefore, in some sense it costs money to use OER in support of learning.
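The effort-to-cost conversion described here is a trivial calculation. A quick sketch (the prep hours and hourly rate below are hypothetical, chosen purely to illustrate the point):

```python
# Converting instructor switching effort into a dollar cost.
# Both the prep hours and the hourly rate are hypothetical values,
# used only to illustrate the calculation described in the text.

def switching_cost(prep_hours: float, hourly_rate: float) -> float:
    """Cost of adopting new materials = time spent x instructor's hourly rate."""
    return prep_hours * hourly_rate

# e.g., 40 hours of prep for an instructor earning $50/hour:
print(switching_cost(40, 50))  # 2000
```

The same formula applies whether the instructor is switching to a new commercial edition or to OER — only the number of hours differs.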

In an entirely different sense, publishers work very hard to ensure that they are able to collect fees from all the students who use their materials. The absolute control with which these rents can be extracted online is one of the main drivers for publishers moving to digital. Legally enforceable Terms of Use can prohibit students sharing account information. The deployment of DRM (plus the DMCA) can turn clever ways of accessing materials without paying into a crime. There must be a royalty paid for each and every use of publisher materials.

In contrast, by definition OER never require a license fee or royalty to use. You have free, perpetual, and irrevocable permission to retain, reuse, revise, remix, and redistribute OER. Being devoid of any licensing fee or royalty, OER are clearly much more affordable for students than commercial resources. Even at institutions where a small support fee (e.g., $10) is charged per course to provide support to faculty in their use of OER, OER are still significantly less expensive than commercial materials.

Who is responsible for ensuring ADA compliance of OER? How are necessary integrations with student information and learning management systems supported? Who provides technical support to learners and faculty when needed? There is no free lunch, and most OER proponents understand that. That’s why businesses are beginning to sell services around OER, perhaps in anticipation of the day when the foundation, government, and venture capital funding that has kept OER afloat begins to dry up.

The answer to the questions of “who [plays some critical role]” is “anyone who is willing and able.” Often this will be individuals, but I believe there is also significant potential value to instructors in the emergence of organizations that are willing to take on an OER stewardship role – organizations that make it easier for faculty to adopt OER effectively. This belief in the value of such organizations, shared by myself and my partner Kim Thanos, is why Lumen Learning exists. As a rule, Lumen does not create new OER. Instead, we try to provide a simple, supported pathway for faculty members to adopt and teach effectively with OER – including LMS integration, technical support, and many of the functions mentioned above.

But we didn’t create Lumen out of fear that one day foundation, government, or other funding will dry up. If anything, the increasing adoption of policies by governments and foundations requiring that copyrightable materials created by their grantees must be openly licensed (e.g., see the two dozen or so Foundation open licensing policies and this list of seventy-some state and federal open licensing policies) are ensuring that there will be new OER created and shared for a long time to come. Instead, we created Lumen to accelerate the effective adoption of OER – because we believe that OER adoption will greatly improve access, affordability, and learning for students while simultaneously greatly expanding pedagogical freedom for faculty.

OER are a great way to enrich and personalize instruction alongside core instructional resources. But the hidden costs of OER make them a dubious replacement for primary course material, unlikely to deliver substantial savings over proprietary digital solutions in the long run. Low-cost OER make great supplements, but there is real value in core instructional content presented systematically and updated regularly by invested authors. Today’s highest-quality, most in-demand proprietary content is the product of an ecosystem that recognizes and rewards that value. That isn’t likely to change in the future. One good thing about the OER buzz is that it has opened up the conversation about the cost of course materials and our collective responsibility for improving college affordability. We’ll just have to be careful that we’re not sacrificing the quality of the learning experience in the pursuit of lower cost.

I fully agree that “there is real value in core instructional content presented systematically and updated regularly by invested authors.” I simply disagree that the only mechanism for organizing these collective efforts is the promise of royalty payments. Regardless of whether or not the alternative incentive models employed by creators and improvers of OER should be theoretically viable according to standard economic models, these models are viable and they are flourishing. In addition to making great supplements, OER make great replacements for core instructional materials – and the peer-reviewed research on the topic demonstrates that OER are generally at least as effective as the commercial materials they replace. The significant cost savings, comparable student outcomes, and greater pedagogical flexibility facilitated by OER are some of the reasons why dozens of colleges across the country are at this very moment switching their core instructional materials from publisher resources to OER across entire degree programs. These OER-based degrees (or “Z Degrees”), in which general education courses and required courses in the major all use OER instead of commercial materials, can cut the cost to graduate by 25% or more at community colleges. These programs are growing rapidly (most recently with encouragement from Achieving the Dream) and will be important to monitor.

There is indeed an ecosystem of educational materials. Until very recently that ecosystem only contained materials developed by a handful of companies through largely homogenous processes. These processes produced materials that were terrifically expensive, difficult to distinguish from one another, and somewhat draconian in their approach to copyright. Now the ecosystem also includes OER – materials that are developed through heterogeneous processes, are royalty free, are almost infinite in their variety, and provide faculty and students alike with incredible flexibility via open licenses.

It will be interesting to watch the educational materials ecosystem evolve over the next several years. What will the new equilibrium state look like in 2020?


Terrible ideas and brilliant ones can be surprisingly difficult to tell apart.

More often than not they tend to be terrible. So, from a purely statistical perspective, what you’re about to read is likely a horrible idea. That fact notwithstanding, in the great tradition of selfishly publishing less than half-baked ideas on my blog so that I can benefit from readers’ comments and feedback, I present the following little thought experiment for your enjoyment.

Some Context

In which I present a view only slightly more strident than my actual feelings

Every day it becomes more obvious that we are deliberately slowing the advance of science and the useful arts, the pace of that innovation which we so fastidiously revere, for the sole purpose of propping up the expired business models of academic journal publishers.

But we may have finally seen the nail that will seal academic journal publishers’ coffins. Via Stephen Downes yesterday, here’s an article about how to use Tor with Sci-Hub in order to privately and securely read research articles you don’t have legal permission to access. The “you’re not allowed to read this research” genie seems to be completely out of the bottle. The Library Loon shares thoughts about the countermeasures journal publishers will undoubtedly begin to employ against Sci-Hub and, consequently, all other readers of their articles – making them even less useful to scholars, researchers, and other readers than they currently are. It sounds like Napster and the turn of the century music wars all over again.

At least when the music industry began suing downloaders, they could pretend they had artists’ financial interests in mind. But given the thorough intellectual raping and pillaging journals commit against academic authors, stripping them of essentially every right contractually assignable, there will be no sympathy for the journals as their end game plays out. Who exactly are the little guys the journals are fighting to protect as they sue researchers for illegally reading articles that advance their cancer research? Who will the public side with here? Was there ever a worse PR disaster waiting to happen?

If it were ever broadly understood by the public, even the current state of academic journal publishing would be a PR disaster. Let’s be clear – for many decades the academic author never even had a choice. If she hoped to keep her job, she was forced to give away – literally give away – any and all rights to her own work so that journals could charge outrageous sums of money to prevent most people from reading it. Adding insult to injury, the journal then also charged the author to purchase back copies of her own words, which of course were no longer hers but now the sole ‘property’ of the publisher. (And did I mention she also has to serve as a volunteer reviewer for the journal in order to meet her service obligations to earn tenure?) Today, authors have the privilege of not only doing all the research, writing all the words, and being volunteer review labor for the journal, but if they want to retain control over their writing they can also pay the journal $1500 – $3000 per article they publish. Makes you want to write more, doesn’t it?

The entire system is morally compromised and morally compromising. Here’s a modest proposal to speed academic journal publishers’ demise, ease the pain of their passing, and put innovation back on the fast track.

The Thought Experiment

In which I attempt to strain the reader’s willful suspension of disbelief

Strong copyright advocates have long claimed that creative works are “property” and therefore should be afforded all of property’s protections and other considerations under the law (plus whatever additional concessions they could wring out of the hapless congresspersons they lobby). Let’s play that idea out for a moment. Adapting language from Wikipedia for the sake of expediency:

Eminent domain is the power of a state or a national government to take private property for public use. The property may be taken either for government use or by delegation to third parties, who will devote it to public or civic use. The power of governments to take private real or personal property has always existed in the United States, as an inherent attribute of sovereignty. This power reposes in the legislative branch of the government and may not be exercised unless the legislature has authorized its use by statutes that specify who may use it and for what purposes. The legislature may take private property by passing an Act transferring title to the government. The property owner may then seek compensation by suing in the U.S. Court of Federal Claims… Its use was limited by the Takings Clause in the Fifth Amendment to the U.S. Constitution in 1791, which reads, “… nor shall private property be taken for public use, without just compensation.” The Fifth Amendment did not create the national government’s right to use the eminent domain power, it simply limited it to public use.

Think about it for a moment. What privately held property could possibly benefit the public more than the scholarly record? The accumulated knowledge of the researchers of the last 100 years or so (i.e., research articles still under copyright of disseminators as opposed to authors)? Has there ever been a better public use argument for taking private property under the government’s eminent domain power? Honestly, what is the benefit of a road somewhere compared to the scholarly record?

As the Fifth Amendment notes, the law requires “just compensation” when private property is taken in this manner. That would undoubtedly be a large sum of money in this case. Where would it come from? Probably from the budgets of libraries across the country that are already paying outrageous fees to lease temporary access to intentionally crippled digital versions of research articles.

This could be a win-win:

  • Publishers, who are under incredible pressure from alternative distribution models, legitimate open access programs, and guerrilla open access initiatives like Sci-Hub, get a major golden parachute for their shareholders.
  • The full scholarly record goes immediately into the public domain, benefiting everyone.
  • Libraries continue to pay out of their acquisitions budgets for some period of time, but now they’re paying for something that is actually usable by students, faculty, scholars, and others.
  • The hundreds of millions of people whose tax dollars supported much of this research, but who have never had a chance to see the results of the work they funded, finally get what they paid for.

Now, you may be asking yourself ‘can “intellectual property” be taken under eminent domain?’ Here’s a lengthy excerpt from a Chapman Law Review article (with a link to an apparently legal copy of the entire article) that makes the case – An Eminent Consequence: Why Copyrights Could Become Subject to Eminent Domain. Reader beware: This is a single article by a single author (whom I don’t know), and clearly this is not the definitive word on the matter. However, the argument is intriguing, and the eminent domain taking of the scholarly record by the government appears to be theoretically possible.

There would, undoubtedly, be significant logistical obstacles to overcome in implementing such a plan. (For example, if a foreign publisher’s copyright is formally recognized in the US, does the US government have jurisdiction to seize it under eminent domain?) A plan like this would be anything but simple or straightforward. But think of the benefit to society it could create… Imagine the revolutions in scientific discovery that would be enabled by unimpeded, cross-disciplinary, automated text-mining – something utterly impossible under the current publisher regime. Imagine the increased rate of innovation that would emerge as more and more people had access to cutting-edge knowledge, increasing our ability to solve ever larger and more complex problems at home and abroad. Imagine the humanitarian impact of thousands of volunteers in Wikipedia-like projects freely translating these research results into languages spoken in the developing world. Imagine shaking up the antiquated tenure and promotion policies at universities! (Ok, I admit it – that last suggestion strains my authorial credibility.) As Dr. Seuss might say, you could imagine until your imaginer gets sore and not even have scratched the surface of the possibilities.


Well, there you have it. When you play out this little thought experiment, where does it take you? Does it catalyze a global increase in innovation and quality of life? Does it cause the zombie apocalypse that ends human society? Something in between?


The tl;dr. In many contexts – like open content, open educational resources, open source software, open access, and open data – “open” means “free plus permissions.” But when modifying nouns that aren’t copyrightable – for example, in contexts like “open pedagogy” or “open educational practices” – open necessarily means something else. There are significant costs when we aren’t clear about what we mean by open in different contexts.

A dozen or more years ago I was sitting in a meeting at MIT. There were fifteen or so people from around the world in the room, and we were talking about open courseware. At some point the conversation turned to copyright and the incredible amount of time, effort, and resources it takes to review and clear all the material you want to share openly. A participant from China smiled broadly and said something along the lines of “That’s one of the great things about doing this work in China – you don’t have to worry about copyright! Nobody over there cares.” We all laughed appreciatively at his caricature of his own culture. As our laughter died down he added emphatically, over a flat stare, “I’m serious. We don’t even think about it.” Our laughter turned to awkward chuckling as we struggled to change the subject.

Matthew Smith has written a really excellent article over on the ROER4D (Research on Open Educational Resources for Development) site titled “Open is as Open Does”. This post is extremely timely for me, given that the #OpenEd16 Call for Proposals just opened yesterday and includes the themes “The Meaning of Open” and “The Ethics of Open.” There are numerous really smart and challenging ideas in Matthew’s post worth responding to, but I’ll focus this little bit of writing on the meaning and ethics bits.

Matthew begins,

As someone who thinks about and funds research on openness in developing country contexts, I’ve often wanted to ditch the word open altogether. It is such a value-laden term, with so many potential interpretations that people attribute whatever meaning they like to it – often with great passion. Then we end up in endless debates regarding effectively arbitrary definitions. Given that any application of “open” to a new social innovation (like open educational resources or open government) is really just a social convention, can we really say that one definition is the right one?

Asking for the single, universally correct meaning of a word seems like a less than fruitful enterprise, and it’s obviously not what Matthew means to do here – in fact, he’s suggesting it’s impossible. Let me agree with him for a paragraph or two.

There are cultural and contextual differences in the meanings of words – what is the correct definition of “flat”? Does it mean “apartment” or not? The answer, of course, depends on when and where you are. Meanings vary by context. Even in the very same time and place, the same word can have multiple meanings which become clear only through their use in context. Is a “sentence” a set of words that conveys a complete thought, or the punishment given to a person found guilty of a crime? Yes. Words only have meaningful definitions in context, and so part of our quest for clarity about “open” has to be a scoping of the context we care about. (Just for fun, look at Google’s definition of open. Be sure to click the down arrow at the bottom of the box to get all the juicy details – Google presents 15 major definitions of the term, none of which cover the specific context our current conversation imagines.)

While there is an important sense in which a certain level of ambiguity lubricates our everyday conversation, there are some contexts in which being specific about definitions matters greatly. Context: when the doctor says I need to take 100 milligrams of ibuprofen, it matters a great deal that we agree – with specificity – about what a “milligram” means (and what “ibuprofen” means, for that matter). Context: if you spend $1000 for an Apple MacBook, you have an expectation that “Apple MacBook” means something very specific. Context: when the US Department of Labor offers $2B in grant funding on the condition that all materials created with that funding be “open” educational resources, they likely have something quite specific in mind.

Generally speaking, the importance of defining a term-in-context with specificity is directly correlated with the potential negative consequences of persistent confusion about the meaning of the term. I continue to argue – often with great passion – about the meaning of “open” in the narrow context in which I work because I believe failure to create a broad consensus about its meaning will have significant negative consequences for society. I’ll come to those in a moment.

Matthew continues,

Critically, what the research [in the developing world context] suggests is that open standards and/or legal permissions are neither necessary nor sufficient for some people to treat the material as open in practice (i.e., engage in the 5Rs practices) to make or do something useful or valuable with that technology or content. This is true particularly in developing country contexts without active copyright enforcement or culture. What the research in the developing world is revealing over and over again is that “free with permissions” can happen through social rather than legal means – it may be based on norms rather than law… “In situations where intellectual property enforcement is either impossible or counterproductive, people frequently behave toward protected content as if it were part of a commons, and as if intellectual property regimes did not exist, or simply did not matter.” (Mizukami & Lemos 2010)

You can now understand why reading this article reminded me of my uncomfortable experience at MIT all those years ago. I fully trust that Matthew is correct about what is happening on the ground in many developing world contexts. However, I don’t believe this reality changes the context-specific meanings of open that I recently wrote about in my article on The Consensus Around Open. When used as an adjective to describe specific creative artifacts – like open content, open educational resources, open access research articles, open data, or open source software – the clear community consensus is that “open” means free plus permissions.

Matthew hints toward a path through the confusion later. He notes – and this is critically important – that permission to do something doesn’t guarantee that it gets done:

OER by themselves don’t do anything – they don’t have an impact just sitting in the cloud or on someone’s Raspberry Pi. It is only when they are used in particular ways that change can happen – and it is this change that motivates most people interested in “open” in the first place.

Here’s the intellectual pivot that I believe is important. The context Matthew is really writing about – open pedagogy, open practices, or open educational practices, depending on whose terminology you prefer – is meaningfully different from the context I just described. The former uses open as an adjective describing characteristics of creative works, which are copyrightable; the latter uses open as an adjective describing things people do in support of learning, which aren’t copyrightable. And since they aren’t copyrightable, it makes little sense to try to define them as being permissible or not.

I define open pedagogy as ‘the set of teaching and learning practices that are only possible or practical in the context of the free access and 5R permissions characteristic of open educational resources’ (e.g., here). Those who prefer other terms, like open practices or open educational practices, use different language that means largely the same thing.

Note that my definition of the “open” in open pedagogy doesn’t say how you ended up getting those 5R permissions – it only encompasses the range of things you can do once you have them. There are a number of ways you could end up with these permissions as pertains to a specific creative work you want to use in support of learning:

  1. You could be granted them explicitly by means of an open license,
  2. The creative work you want to use could be in the public domain, meaning there is no need to acquire permissions,
  3. The use you want to make could qualify as fair use / fair dealing, exempting you from the need to acquire permissions, or
  4. You could just not care about the legality of what you’re doing and proceed to act as though you had the necessary permissions when you really don’t, perhaps because you think there’s no way you’ll get caught.

Matthew sees a lot of (4) happening in the developing world, and suggests that perhaps the way we define open should be based on what’s happening in the real world, rather than what we are imagining in our towers of ivory:

One alternative approach would be to take a grounded theory approach to the open definition. In other words, we build up a definition based more on what is happening in practice, rather than pre-conceived theory about open. Given the evidence emerging from IDRC supported research, the conclusion would be to focus on openness in practice, what that looks like, how to do it well, and its benefits – regardless of legal or technical status. I see this as the logical evolution of openness: First we define it (arbitrarily), then we research it, and then based on the new evidence, we redefine it.

I see two problems with this approach. First, it would equate open with breaking the law (we can argue about the ethics of this disobedience separately). In the US and other places we have worked extremely hard to demonstrate that open educational resources respect the law, comply with the law, and that it makes sense for government to embrace the principle of openness in many of its functions. If open became a synonym for violating the law, there is no way governments could support openness, no way that open policies could be enacted, etc. I don’t believe that making “open” mean a systematic disregard for copyright moves forward the work any of us are trying to do. (I could probably support a phrase like “guerrilla open” to describe these practices in order to acknowledge their broad practice and characterize them accurately.)

The second problem is by far the larger of the two, and pertains to both the forms of open pedagogy that rely on fair use and those that depend on guerrilla open practices. This is because forms of open pedagogy practiced in the context of a fair use exemption generally must remain private (in order to qualify as a fair use) and forms of open pedagogy practiced illegally generally must remain private (in order to avoid being caught and punished).

When work is done privately – when it is carefully hidden from the public – no synergy is possible. When the individual nodes remain disconnected, no network can emerge. When the giant hides, no one can stand on his shoulders. For example, we now know that Archimedes, in his The Method of Mechanical Theorems, developed several techniques similar to integral calculus. However, because Archimedes’ work was lost, it was nearly 2000 years before similar techniques were rediscovered by Newton and Leibniz. How would the world be different today if the calculus had emerged 1800 or 1900 years earlier? We’ll never know, because this incredible intellectual work never became public until quite recently.

Likewise, by relying on interpretations of open that require teachers and students to develop and perform new pedagogies in private, we are no doubt missing out on amazing work that is occurring around the world. The same breakthroughs are likely being made independently by multiple faculty or teachers, only to retire with them, because the whole enterprise was illegal in the first place and so they could never speak about it publicly. Or perhaps it all depended on an interpretation of “fair use” that they weren’t willing to take a chance getting sued over, so they kept quiet about it. No doubt we are also failing to learn lessons about things people secretly tried that were ineffective, and these ineffective techniques are then replicated in classrooms around the world, too.

By contrast, when open pedagogy is practiced publicly its methods, tools, artifacts, and results can be made freely and openly available to the public. They can be disputed, replicated, evaluated, and argued about. There are shoulders to stand on. There are mistakes to avoid. There is progress to be made.

The opportunity cost of defining the “open” in OER as affordable or free (without permissions), or building the “open” in open pedagogy on a foundation of fair use or guerrilla open, is nothing less than potentially delaying the advance of society. That’s why I’m so passionate about understanding what “open” most powerfully means in each of the various contexts in which we find it.