The Dance of the Not Commons

Last October Doc Searls gave the Ostrom Memorial Lecture for the Ostrom Workshop at Indiana University. In his lecture he carries on what I believe to be an incredibly unfortunate tradition. I’ll call it “the dance of the not commons.” It’s a simple dance. Step one (right foot): state that something (e.g., the internet, knowledge, OER) is a commons. Step two (left foot): immediately enumerate the many ways that the thing you just called a commons is totally, completely, orthogonally different from a commons. Here’s the core of the dance, with my color commentary in parentheses.

In economic terms, the Internet is a common pool resource; but non-rivalrous (he has to say this because common pool resources are by definition rivalrous) and non-excludable (he has to say this because common pool resources are by definition excludable) to such an extreme that to call it a pool or a resource is to insult what makes it common (not only does the internet fail the test of being a common pool resource, it does so to the most extreme degree imaginable)….

Not understanding the Internet can result in problems similar to ones we suffer by not understanding common pool resources such as the atmosphere, the oceans, and the Earth itself (how can we talk about the dangers of not understanding common pool resources when the argument begins by taking something that is clearly, blatantly, obviously not a common pool resource, and calling it one?).

But there is a difference between common pool resources in the natural world, and the uncommon commons we have with the Internet. See, while we all know that common-pool resources are in fact not limitless (because by definition they are excludable and rivalrous)—even when they seem that way—we don’t have the same knowledge of the Internet, because its nature as a limitless non-thing (because the internet is non-excludable and non-rivalrous) is non-obvious (apparently!).

Do you see how awkward the dance is? How impossible it is to do this dance without stepping all over yourself?

As originally conceived, commons formed around rivalrous, excludable, scarce resources – aka common pool resources. Much of the scholarship around commons focused on governance and other collaborative methods of ensuring that the scarce common pool resources around which commons communities formed were not destroyed or depleted.

But then a terrible choice was made.

In the first decade of our new millennium, Elinor Ostrom and Charlotte Hess—already operating in our new digital age—extended the commons category to include knowledge, calling it a complex ecosystem that operates as a common: a shared resource subject to social dilemmas. They looked at ease of access to digital forms of knowledge and easy new ways to store, access and share knowledge as a common. They also looked at the nature of knowledge and its qualities of non-rivalry and non-excludability, which were both unlike what characterizes a natural commons, with its scarcities of rivalrous and excludable goods. A knowledge commons, they said, is characterized by abundance.

Why insist on using the same term – commons – when the nature of the underlying phenomenon, and the problems that its nature gives rise to, are so completely, fundamentally different? Yes, “natural commons” and “knowledge commons” as they’re defined above both deal with “a shared resource subject to social dilemmas.” But insisting on referring to the social issues surrounding resources that are non-excludable and non-rivalrous (which in every other circumstance we would call “public goods”, not common pool resources) – resources which are abundant instead of scarce – as being a “special kind of commons” is like insisting on referring to an automobile as a special kind of carriage – a horseless carriage. Yes, both are “conveyances that move people from one location to another.” But insisting on using the language of horse and carriage to describe an automobile creates all kinds of intellectual problems for those who insist on doing it. The choice to use the wrong language causes us to also use the wrong mental frameworks, which leads us to try to solve problems that don’t actually exist instead of the very real problems that do.

The problems the open education community faces with regard to OER are not the problems of common pool resources – problems of overuse and depletion that we solve through shared governance and accountability. There is no sense in which the open education community needs to form a governance committee that carefully limits public access to the textbooks produced by OpenStax in order to make sure there’s always enough OpenStax to go around. That’s just not a thing. OER are not a common pool resource and the community of creators and users that have formed around them are not a commons.

The problems we face with OER are the problems of public goods – issues related to under-production and free-riding. The world needs much more OER. But what individual or organization would spend the time and effort necessary to make OER when they will just be given away for free, and there will be no opportunity to recover the investment of time and effort? And why would anyone ever pay the creators or maintainers of OER, when you can legally use OER for free? (When’s the last time you personally donated to another person or organization to support their creation and maintenance of OER?)

If you think OER are a commons you are focused on solving the wrong problems – problems that don’t actually apply to OER. OER and the community of creators and users that have formed around them are a public, not a commons. We need to wake up and start solving the actual problems associated with OER – public goods problems – instead of wasting our energy doing the dance of the not commons.

Reducing Friction in OER Adoption

Last week I promised I would write a few posts about reducing friction with regard to OER. When I use the phrase “reducing friction” in this context, I mean taking things that are needlessly difficult and making them much easier. In last week’s post I talked about how we’re making it ridiculously easy for students, faculty, and others to contribute to the maintenance and improvement of OER. In this post, I want to talk about making it ridiculously easy for faculty to adopt OER.

“How much easier could it be?” you might ask. “You find a PDF of an open textbook for your class, you upload it into your LMS, and you’re done!”

While we could make a long list of the ways reality could be improved, the reality today in US higher education is that many faculty rely heavily on publisher-provided learning materials in their teaching. These materials frequently include some integrated configuration of instructional content; quizzes; homework systems that provide unlimited practice, automatic grading, and immediate feedback; teaching aids like PowerPoints and pacing guides; and analytics tools that give faculty some view into how their individual students (and their course as a whole) are doing. Over the years I’ve seen over and over again that many faculty are understandably hesitant to walk away from these constellations of supports and instead adopt a static PDF open textbook in order to save students money. It’s not easy to swap your old textbook for a free PDF and then try to replace, by yourself, all the things the publisher was providing. Many faculty understand that systems providing students practice with immediate, targeted feedback support better student learning, and that analytics and related tools can help them be more effective teachers. None of these technologies are things a “normal” teacher is in a position to reproduce for themselves.

Of course there are always “early adopters” who love trying everything new, seem to have boundless energy and enthusiasm, appear to feel immune to the pressures normally associated with tenure and promotion, and quite often do amazing things. They’ll write their own LMS, practice system, analytics, or other replacement tools. Or remix existing open source tools to meet their needs. But, relative to the total number of people teaching in colleges and universities, this early adopter group is very, very small. If we want to facilitate a wide-scale shift away from traditionally copyrighted materials to OER, we have to meet these other faculty – the overwhelming majority of faculty – where they currently are.

A Wide-Scale Shift

And just to be clear, I do want to facilitate a wide-scale shift away from traditionally copyrighted materials to OER. I still wholeheartedly believe what I wrote in a series of grant applications to the Shuttleworth Foundation beginning back in 2012 (the Shuttleworth Foundation generously provided some of Lumen’s initial funding and still owns a stake in the company):

Education is more important than ever before. Nothing else can do as much to promote happiness, prosperity, and security for individuals, families, and societies. And while many novel and useful experiments are occurring outside formal education, the degrees, certificates, and other credentials awarded by formal institutions are still critically important to many people….

My long-term goal is to create a world where OER are used pervasively throughout primary, secondary, and post-secondary schools. In this vision of the world, OER replace traditional, expensive textbooks for all primary, secondary, and post-secondary courses. Organizations, faculty, and students at all three levels collaborate to create and improve an openly licensed content infrastructure that dramatically reduces the cost of education, increases student success, and supports rapid experimentation and innovation in education.

The history of innovation is, in many ways, a graveyard filled with the incorporeal corpses of ideas that early adopters loved, but that languished and ultimately died because they could not make it across the chasm to reach the rest of us. This is still a very real risk for OER. Will OER find mainstream acceptance and adoption? It’s easy to believe that it already has if you spend most of your time inside the open education community bubble. But at OpenEd19, Jeff Seaman of the Babson Survey Research Group (which has conducted nationwide surveys of higher ed faculty attitudes toward OER since 2014) described a likely future in which “OER will remain a niche-only presence (or even worse)”. According to Babson’s most recent survey, about 2/3 of faculty reported having no plans to even consider using OER in the next three years. Only 6% said they plan to use OER over the next three years. We’re clearly not anywhere close to achieving wide-scale adoption.

Why?

One of the less obvious reasons innovations can struggle to make it across the chasm is because purists among the early adopters can’t stomach the transformation that is necessary for an idea to find mainstream adoption, and they fight actively against it. One vivid example of this kind of pushback comes from the history of computing.

From the Command Line to the GUI

Once upon a time, computing was all about the command line. To use a computer, you had to know commands, and you had to type them in at the prompt. Not only did you have to know the commands, you had to know the “flags” or command-line arguments to use with them. Something like rm -rf *. Often, when you wanted to use a piece of software, you had to get a copy of the source code and build the software yourself. This process was often fraught with difficulties. Linus Torvalds, the creator of Linux, referenced these difficulties in his email announcing the first availability of Linux source code:

Are you finding it frustrating when everything works on minix? No more all-nighters to get a nifty program working? Then this post might be just for you 🙂

Later in the email he would claim that Linux “has been known to work. Heh” and that you would “need to be something of a hacker to set it up.” These acknowledgements of how difficult it could be to get the software running were actually meant as enticements to get people engaged. Of course commercial software existed that did the same things Linux did, but Linus wanted to write an operating system himself – one that other people “might enjoy looking at it and even modifying it for their own needs.” This was a common vision of the world of computing at the time – it was a place for people with expertise and time to spend. Some of the resentment people feel about this attitude toward computing is captured in a meme that makes the rounds each year.

Other people eventually had a vision of taking computing “to the masses.” They understood that normal people were never going to spend the time necessary to learn commands, arguments, and how to fight with a computer in order to compile their own software. If the power of computing was going to reach everyone, a much, much easier-to-use model was needed. Enter the Graphical User Interface, with its windows, icons, pointing, and clicking. In his essay In the Beginning Was the Command Line, famed sci-fi author Neal Stephenson summarizes the backlash against the GUI:

The introduction of the Mac triggered a sort of holy war in the computer world. Were GUIs a brilliant design innovation that made computers more human-centered and therefore accessible to the masses, leading us toward an unprecedented revolution in human society, or an insulting bit of audiovisual gimcrackery dreamed up by flaky Bay Area hacker types that stripped computers of their power and flexibility and turned the noble and serious work of computing into a childish video game?

It was a huge leap going from this:

[Image: a page full of code compile errors]

to this:

[Image: the old-school Mac graphical user interface]

I’m glad Apple, Microsoft, IBM, and others pushed forward with developing, commercializing, and making GUI-based operating systems widely available. While the power users are still doing amazing things at the command line even today, the overwhelming majority of the people who use a desktop or handheld computer today use a GUI. And even though it is admittedly less flexible than working at the command line and writing and compiling your own software, people still accomplish amazing things working through a GUI. One might rightly claim that the GUI democratized access to the power of computing.

From OER to OER Courseware

There are some people in the open education community who don’t believe OER should be embedded in courseware. One of their complaints is about the degree to which embedding OER in courseware may make it more difficult for faculty or students to engage in the 5R activities. I completely agree that it is more difficult in this context. When you’re just using an open textbook in a remix-optimized platform, you can change anything anywhere with minimal consideration. When you’re using OER courseware in a learning-optimized platform, and all the content is individually aligned to learning outcomes, practice opportunities, formative assessments, and summative assessments, then changes you make to content have to percolate through the whole system. A failure to follow through with the cascade of required changes can lead to highly undesirable outcomes – like removing all the content and practice related to a topic, but forgetting to remove the associated questions on the quiz. Or breaking the outcome alignments that enable analytics tools to make study suggestions to students or outreach suggestions to faculty.
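
To make the cascade problem concrete, here’s a minimal sketch (in Python, with hypothetical page, question, and outcome names – not any real platform’s data model) of outcome-aligned courseware, including a check that flags quiz questions left dangling when their aligned content is removed:

```python
from dataclasses import dataclass, field

@dataclass
class Courseware:
    """Toy model of outcome alignment: pages teach outcomes, questions assess them."""
    content_outcomes: dict[str, set[str]] = field(default_factory=dict)   # page id -> outcomes taught
    question_outcomes: dict[str, set[str]] = field(default_factory=dict)  # question id -> outcomes assessed

    def remove_page(self, page_id: str) -> None:
        """A remix action: delete a content page (and the outcome coverage it provided)."""
        self.content_outcomes.pop(page_id, None)

    def dangling_questions(self) -> list[str]:
        """Flag questions that assess an outcome no remaining content teaches."""
        covered = set().union(*self.content_outcomes.values()) if self.content_outcomes else set()
        return [q for q, outcomes in self.question_outcomes.items() if not outcomes <= covered]

course = Courseware(
    content_outcomes={"photosynthesis-page": {"LO-3"}, "respiration-page": {"LO-4"}},
    question_outcomes={"quiz-q7": {"LO-3"}, "quiz-q8": {"LO-4"}},
)
course.remove_page("photosynthesis-page")  # remove a topic's content and practice...
print(course.dangling_questions())         # ...but quiz-q7 still assesses LO-3: prints ['quiz-q7']
```

This is exactly the “removed the content but forgot the quiz questions” failure described above – in a learning-optimized platform, every edit has to be checked against alignments like these.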

I see this as exactly the issue the software community struggled with when thinking about the GUI vs the command line. Yes, there’s more flexibility available if you put OER into a remix-optimized platform. But when all the OER, homework, supplementals, exams, analytics, and other tools are outcome aligned and well integrated in a learning-optimized platform, OER courseware is significantly easier to adopt than a PDF.

OER courseware can reach the overwhelming majority of faculty where they currently are – while simultaneously improving student outcomes and dramatically reducing costs. As I explained at some length a few years ago, this is the three-part framework we use to think about our impact: impact = (learning gains) x (cost savings) x (number of students). Creating OER courseware allows us to increase all three components of the framework simultaneously.
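
Because the three factors multiply, improving any one of them scales total impact proportionally – which is why courseware, which can move all three at once, is so attractive. A trivial sketch, with made-up numbers used purely for illustration:

```python
def impact(learning_gain: float, cost_savings_per_student: float, students: int) -> float:
    """Impact framework: impact = (learning gains) x (cost savings) x (number of students)."""
    return learning_gain * cost_savings_per_student * students

# Hypothetical inputs, for illustration only: doubling the number of students
# (or either other factor) doubles total impact.
print(impact(1.05, 80.0, 10_000))  # 840000.0
print(impact(1.05, 80.0, 20_000))  # 1680000.0
```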

“Compiling and running UNIX based code on Mac OSX” by Mustafa, from https://stackoverflow.com/questions/4518459/compiling-and-running-unix-based-code-on-mac-osx, is licensed CC BY-SA.

Mac UI image from https://en.wikipedia.org/wiki/History_of_the_graphical_user_interface#/media/File:Apple_Macintosh_Desktop.png is unlicensed.

Reducing Friction and Expanding Participation in the Continuous Improvement of OER

I’m going to write a post or three about some of the friction that exists around using OER. There are some things about working with OER that are just harder or more painful than they need to be, and getting more people actively involved in using OER will require us to reduce or eliminate those points of friction.

I’ve been writing about continuous improvement in the context of OER for a few years now. To date, I’ve written about and worked on reducing the friction involved in a relatively centralized model for continuous improvement of OER – a “top down” approach, if you will.

Today I want to write about the other side of continuous improvement – a complementary, “bottom up” approach to facilitating broad participation in the continuous improvement process based on individuals’ experiences as opposed to data analyses.

It’s a well-established principle in open source software, open content, and other volunteer settings that when it’s hard to contribute, not many people contribute. When it’s easy to contribute, more people contribute. In fact, one of the most important keys to unlocking participation by a community is removing any friction they might experience in the process of participating.

Stop and think for a minute about how the process of suggesting an improvement to an educational resource typically works. It generally happens in one of two high-friction ways. In the first model, you send an email to an errata@publisher.com or errata@oerpublisher.org email address. In that email you have to try to describe exactly where the improvement should be made, probably by including a URL or page number, followed by a description of where on the page the change is supposed to go (“in the second sentence of the seventh paragraph…”). Then you can finally describe the specific change you think needs to be made. Or perhaps the provider wants you to use their ticketing system, and you end up in a piece of software like Zendesk. Once you figure out how to create a ticket, you can write one up that includes all the information described above (you might even be asked to do some tagging of your ticket). Either way, if they choose to make the improvement you suggest, it will likely be months or years before students in class benefit from it, since it will be rolled into the next edition.

In other words, there’s a ton of friction in this process. And because it’s so painful, countless suggestions are never made that would have been made if the process were easier.

This semester at Lumen we’ve launched a continuous improvement pilot in which I believe we’ve removed just about all the friction that’s possible to remove from this process. Here’s how it works:

  1. There’s a new button at the bottom of every page of content. It says “Improve this page.”
  2. When a student or teacher or other user from the public web clicks the button, they’re linked directly to a Google Doc containing all the content from the page. The Google Doc is shared publicly with suggesting mode turned on, so you can just begin typing or commenting immediately, and your suggestions are highlighted and tracked.
  3. You’re done making your suggestion!

How easy is it to suggest an improvement to OER now? Faster than 30 seconds easy!
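
Mechanically, the button can be as simple as a per-page link to the paired, publicly shared Google Doc. A minimal sketch of the idea (hypothetical page IDs and URLs – not Lumen’s actual implementation):

```python
# Each courseware page is paired with the publicly shared Google Doc that mirrors it.
# (Hypothetical mapping -- in a real system this would live alongside the content.)
IMPROVEMENT_DOCS = {
    "intro-to-photosynthesis": "https://docs.google.com/document/d/EXAMPLE_DOC_ID/edit",
}

def improve_this_page_link(page_id: str) -> str:
    """Render the 'Improve this page' footer link for a content page."""
    return f'<a href="{IMPROVEMENT_DOCS[page_id]}">Improve this page</a>'

print(improve_this_page_link("intro-to-photosynthesis"))
```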

The pilot is active in three courses and we’re already receiving great feedback. Much to our excitement, many of the suggestions appear to be coming from students. And they’re sending everything from spelling errors they catch to suggestions about how to make course content more inclusive.

The awesome folks on Lumen’s continuous improvement team have developed some tools and workflows that allow us to track the amount of time it takes us to vet these suggestions and get them implemented in the canonical copy of the courseware. We’re averaging well under 24 hours from the time a suggestion is made until it’s vetted and implemented, and we think we can continue to be that responsive even as the number of suggestions grows after the pilot.

And here is where Lumen’s model particularly shines – since our courseware is embedded in the LMS via LTI (rather than copied and pasted into the LMS), all these improvements are immediately available to everyone using the courseware the instant we make them. The OER is literally getting a little better every single day – benefiting both the teachers and students who have formally adopted the courseware in their LMS and the informal learners who access our OER on our website. That’s the beauty of transclusion – the OER embedded in the courseware and the OER published on our public-facing website are the same copy of the OER. Update once, improve everywhere.
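
The single-copy property is easy to see in miniature: one canonical store, two delivery surfaces (the LTI launch inside the LMS and the public website), both reading the same record. A toy sketch – not Lumen’s actual architecture:

```python
# Toy transclusion model: one canonical copy of the OER, many delivery surfaces.
CANONICAL_CONTENT = {"page-1": "Photosynthesis converts light energy..."}

def render_for_lti(page_id: str) -> str:
    """What a student sees inside the LMS after an LTI launch."""
    return CANONICAL_CONTENT[page_id]

def render_for_public_site(page_id: str) -> str:
    """What an informal learner sees on the public website."""
    return CANONICAL_CONTENT[page_id]

# A vetted suggestion is applied once, to the canonical copy...
CANONICAL_CONTENT["page-1"] = "Photosynthesis converts light energy into chemical energy..."

# ...and both surfaces serve the improved text immediately: update once, improve everywhere.
assert render_for_lti("page-1") == render_for_public_site("page-1")
```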

So far the suggestions people are making aren’t things you could openly license and attribute – they’re either high-level ideas or lower-level fixes to spelling and grammar (i.e., not copyrightable works). So to make sure people are recognized for their efforts to make things better, we’ve created a new Acknowledgments section on the About This Course page in the pilot courses, and we’re adding the name of everyone (faculty, student, or otherwise – it doesn’t matter who you are) who uses their name when making suggestions that we integrate into the courseware.