The future of open source (and open education?)

People love to analogize, or even equate, open education and open source. There are huge problems with this way of thinking… The one that comes first to mind is that many changes to an open source program can be tested empirically, to determine objectively whether or not they improve the program (by increasing its speed, decreasing its file size, etc.), at almost no cost (by recompiling the program and running automated tests). Many changes to an open educational resource, by contrast, cannot be judged objectively (did changing these words really engage learners more? do these new examples communicate the educational content better?). And even when such changes can be meaningfully tested, testing comes at a rather high cost in time and resources (e.g., setting up and running usability tests, or “horse race” research studies involving enough students to produce statistically meaningful results). Of course, this one difference in a community’s ability to judge whether adaptations should be kept or rejected makes a mountain of difference in our ability to collaboratively develop educational resources rationally and objectively. I could go on about the differences, but they aren’t actually the point of this post.
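
The cheap, objective check described above can be sketched in a few lines. This is a hypothetical illustration (the function names and the particular optimization are mine, not from the post): a contributor proposes replacing a loop with a closed-form formula, and an automated test verifies both correctness and speed with no human judgment involved.

```python
import timeit

def sum_of_squares(n):
    # The "upstream" implementation: a straightforward loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_of_squares_fast(n):
    # A contributor's proposed change: the closed-form formula
    # for the sum of squares 0^2 + 1^2 + ... + (n-1)^2.
    return (n - 1) * n * (2 * n - 1) // 6

# Correctness can be checked automatically, at essentially no cost...
for n in (0, 1, 2, 100, 10_000):
    assert sum_of_squares(n) == sum_of_squares_fast(n)

# ...and so can the claimed improvement (here, speed).
old = timeit.timeit(lambda: sum_of_squares(10_000), number=200)
new = timeit.timeit(lambda: sum_of_squares_fast(10_000), number=200)
print(f"speedup: roughly {old / new:.0f}x")
```

There is no analogous one-line assertion for “this paragraph teaches better than that one,” which is exactly the asymmetry the post is pointing at.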

The point of this post is that, because it can be interesting to think about open education in terms of open source (if you’re careful not to push the analogy too far), Tim O’Reilly’s latest piece, Open Source and Cloud Computing, about very-near-future problems for the open source movement, should be required reading for open educators. We will face similar problems in the not-too-distant future, and we should be thinking about them now.

As outlined above, I don’t believe we’ve figured out what kinds of licenses will allow forking of Web 2.0 and cloud applications, especially because the lock-in provided by many of these applications is given by their data rather than their code….

But even open data is fundamentally challenged by the idea of utility computing in the cloud. Jesse Vincent, the guy who’s brought out some of the best hacker t-shirts ever (as well as RT) put it succinctly: “Web 2.0 is digital sharecropping.” (Googling, I discover that Nick Carr seems to have coined this meme back in 2006!) If this is true of many Web 2.0 success stories, it’s even more true of cloud computing as infrastructure. I’m ever mindful of Microsoft Windows Live VP Debra Chrapaty’s dictum that “In the future, being a developer on someone’s platform will mean being hosted on their infrastructure.” The New York Times dubbed bandwidth providers OPEC 2.0. How much more will that become true of cloud computing platforms?

That’s why I’m interested in peer-to-peer approaches to delivering internet applications. Jesse Vincent’s talk, Prophet: Your Path Out of the Cloud describes a system for federated sync; Evan Prodromou’s Open Source Microblogging describes identi.ca, a federated open source approach to lifestreaming applications.

We can talk all we like about open data and open services, but frankly, it’s important to realize just how much of what is possible is dictated by the architecture of the systems we use.

There are a number of ways to understand Tim’s point in the context of open education. If we consider the architecture of higher education, for example, the meaning of “lock-in by data” becomes clear. We can easily reconsider “social network fatigue” (which prevents you from joining too many social networks because you can’t stand to type in all your basic personal details for the nth time) in terms of “gen ed fatigue,” by which students are prevented from moving from one university to another because they know credits won’t transfer and they can’t bear the thought of taking World Civilization again. A student’s own data – course grades and accumulated credits that belong to them – are not really any more portable across universities than your Facebook profile is across social networking services. Note that this is not a technical problem; it is a policy problem, purposely designed to lock a student into a university. While the Bologna Process has certainly been criticized, it is attempting to make it possible for students to move freely between universities. And as my friend Al is so fond of asking, why shouldn’t a student be able to do their physics at UC, their engineering at MIT, their cyberlaw at Stanford, and their religion courses at BYU?

Of course, saying that LMS vendors try to lock our data into their systems would be another reading of this part of Tim’s article, but one that is too obvious because it is too technical; this is more of an open source problem than an open education problem.

Careful reading and thought will show that Tim’s insightful analysis does indeed point toward many of the problems open education will have to face in the near future. Just please don’t take the open source / open education analogy too far. 🙂

1 thought on “The future of open source (and open education?)”

  1. Hello, David,

    I had yet to see O’Reilly’s post; thanks for the link, and the OER recontextualization (albeit within the limitations of the analogy).

    One line of Tim’s original post also seems worth highlighting: “But peer-to-peer architectures aren’t as important as open standards and protocols.”

    Access to source code, paired with true data portability via open standards, as part of a federated/distributed publishing system of open content — these elements together are necessary to allow open content to flourish. While exposing content for public consumption via a URL is a great start, that model still bears more of a resemblance to a textbook (i.e., to AOL or any other content silo) than to truly open content.

    RE: “but many changes to an open educational resource cannot be judged objectively” — this is very true, and there is a better than even chance (heck, maybe even a certainty) that many changes to OERs won’t improve them. But some will, and given how easy it is to allow content to move from point A to point B, it doesn’t make sense not to allow it. This is also a great argument for pairing free and easy distribution/republishing with tightly controlled permissions/edit rights over the authoritative source of the data.
