badges learning analytics open content open education politics research sustainability textbooks

2017: RIP, OER?

I recently blogged about the Apple announcement and how it amounted to publishers ceding the “traditional” textbook market (whether print or digital) to OER makers. One way to interpret that concession is as a win for open education. And it is a win – temporarily. Another way to interpret the concession by publishers is to see it as electronics companies ending production of VCRs and doubling down on DVD players.

In my previous post I asked, “If video-based, multimedia-rich, interactive textbooks are only worth $14.99 to the big publishers, what are relatively static, text-based books with a few photos worth to them?” Think about that for a minute. Sure, there are “traditional” OER textbooks available for free. But when you could have video, multimedia, simulations, and interactive assessments for $15, why would you take a traditional book (whether print or digital) even if it is free?

Secretary Duncan’s Digital Learning Day challenge that the entire US move away from print to digital curriculum by 2017 may or may not be taken up by every K-12 and post-secondary school in the country. But it will be taken up by many of them. How will our beloved OER (90% text, 9% still images, 1% video) compete against what the publishers are turning out then, especially if the prices stay in the teens?

It reminds me of the early days of the web. Back in the early 90s, anyone who could figure out the View Source command could make webpages. And we all did. But in the mid/late 90s when somebody figured out how to use Perl to make Apache talk to MySQL, the web changed forever. Sure, folks were free to keep making the same old dull, non-interactive websites they always had. But no one did. Ask yourself: Of the websites that you use every day, how many of them have a database on the backend? Answer: Every single one, I bet. Overnight the whole web went the way of the programmer, and the expertise required to meaningfully participate (in the sense of Program or Be Programmed) rose dramatically.

The publishers want to make sure the same thing happens to content.

You have to admit that some of the things the publishers are working on are both cooler and better than almost everything that currently exists in the OER space. Can you name a single OER project that does assessment at all (and I don’t mean PDFs of quizzes)? Can you name one that does diagnostic assessment or handles mastery in any meaningful way? We’ve narrowed the entire field of OER down to CMU OLI, Khan Academy, and possibly Thrun’s new stuff. Now, can you think of one of these three that openly licenses their assessments and the engines they run them on? No.

Open education currently has no response to the coming wave of diagnostic, adaptive products from the publishers. To the best of my knowledge no one is really working on next gen OER – OER that are interactive, simulation-rich, deeply multimedia AND combined with OAR that drive diagnosis, remediation, and adaptation. There’s certainly no one funding next gen OER. And believe me – if it took $100M to get the field to where it currently stands in terms of relatively static openly licensed content, it will take at least that much investment again over the next decade for the field to do something truly next gen.

Because this stuff costs so much to do, if no one steps up to the funding plate the entire field is at serious risk. Much has been written about 2012 being “the year of OER.” Let’s hope it’s not the year OER peaks. We need brains, energy, and funding on the next gen OER/OAR problem NOW.

learning analytics open content

Openness + Analytics: Khan Academy Follows CMU OLI Toward Next-Gen OER

I frequently describe openness and analytics as chocolate and peanut butter – both are tasty individually, but together their synergy is truly remarkable. Until recently we only had one example – CMU’s OLI – where this synergy was really running at full steam: openness providing permission to make improvements to curriculum and analytics providing empirical evidence about what changes are needed. (Note that neither the permission nor the evidence alone is nearly as powerful as the two together.) CMU OLI also leverages openness to increase the number of students using their material, which in turn generates more data, which in turn enables more powerful analytics, which in turn leads to better material, etc. CMU OLI’s openly available research shows the progress they’re making on using openness and analytics to improve student learning.

David Hu’s recent and awesome post How Khan Academy is using Machine Learning to Assess Student Mastery adds Khan Academy to the very short list of organizations really working hard at doing this well. Kudos to David, Sal, and everyone involved. They appear to be squarely in the openness/analytics feedback loop.
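For readers who haven’t followed the link: David’s post describes moving beyond a simple streak heuristic toward a logistic regression over features of a student’s response history, estimating the probability that the student’s next answer will be correct. A minimal sketch of that general idea – the weights and feature names below are illustrative, not Khan Academy’s actual model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mastery_probability(weights, features):
    """Estimate P(next answer correct) from response-history features."""
    z = weights["bias"]
    for name, value in features.items():
        z += weights.get(name, 0.0) * value
    return sigmoid(z)

# Illustrative weights; a real model fits these to logged response data.
weights = {"bias": -1.0, "percent_correct": 3.0, "current_streak": 0.2}

weak = mastery_probability(weights, {"percent_correct": 0.4, "current_streak": 0})
strong = mastery_probability(weights, {"percent_correct": 0.9, "current_streak": 5})
```

The payoff of the probabilistic framing is that “mastery” becomes a threshold on an estimate that improves as more response data accumulates – exactly the openness/analytics feedback loop described above.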

Next generation OER, or whatever you want to call it, is not just about publication. It’s about continuous improvement – that little bundle of philosophies and approaches that has revolutionized just about every large-scale field of endeavor besides education.

The next generation has a few problems to solve before they grow up completely, though. First, there is currently no meaningful way to reuse, revise, remix, or redistribute the assessments used by CMU OLI or Khan Academy. (I’ve addressed open assessment previously.) One first step that could be taken down this path is to make assessments embeddable like YouTube videos, with full analytics about use of the embedded instance available to the embedder. Even that tiny step would be huge headway, but would not address 2 of the 4Rs (revise and remix).
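To make the embed idea concrete, here is a minimal sketch in Python of what an embeddable-assessment service might track. Everything here is hypothetical – the URL, storage, and function names are illustrative, not any real CMU OLI or Khan Academy API:

```python
import uuid
from collections import defaultdict

# In-memory stand-ins for what would be service-side storage.
EMBEDS = {}                 # embed_id -> owning embedder
EVENTS = defaultdict(list)  # embed_id -> list of attempt events

def create_embed(assessment_id, embedder):
    """Mint an embeddable instance of an assessment, YouTube-style."""
    embed_id = str(uuid.uuid4())
    EMBEDS[embed_id] = embedder
    html = (f'<iframe src="https://example.org/assess/'
            f'{assessment_id}?embed={embed_id}"></iframe>')
    return embed_id, html

def record_attempt(embed_id, student, correct):
    EVENTS[embed_id].append({"student": student, "correct": correct})

def embed_analytics(embed_id, embedder):
    """Only the embedder who created the instance sees its analytics."""
    if EMBEDS.get(embed_id) != embedder:
        raise PermissionError("not your embed")
    attempts = EVENTS[embed_id]
    correct = sum(1 for a in attempts if a["correct"])
    return {"attempts": len(attempts),
            "percent_correct": correct / len(attempts) if attempts else None}
```

Note that even this sketch only covers reuse and redistribution – the assessment itself stays a black box to the embedder, which is precisely the revise/remix gap.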

Second, both these initiatives are generating huge amounts of data which could be deidentified, aggregated, and shared with the community under open terms. Have you ever tried to teach a course on learning analytics? When you do, you’ll suddenly realize that there are precious few places you can go to get access to an education-related dataset of the size you need to really practice analytics techniques in a meaningful way. Contrary to popular belief, this can be done legally today under the terms of FERPA.
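As a sketch of what the first step of such sharing involves, here is one way pseudonymization and field-dropping might look in Python. The field names are illustrative, and note that hashing alone is not full deidentification – aggregation and suppression of very small groups matter too:

```python
import hashlib

SALT = "keep-this-secret"  # a real release would use a high-entropy secret

def pseudonymize(student_id):
    """One-way salted hash: records stay linkable across rows without
    exposing real identities."""
    return hashlib.sha256((SALT + student_id).encode()).hexdigest()[:12]

def deidentify(records, drop_fields=("name", "email")):
    """Drop directly identifying fields and replace IDs with pseudonyms."""
    out = []
    for rec in records:
        clean = {k: v for k, v in rec.items() if k not in drop_fields}
        clean["student"] = pseudonymize(rec["student"])
        out.append(clean)
    return out
```

Because the hash is stable, a researcher can still follow one (pseudonymous) student’s trajectory across the whole dataset – which is what makes the data useful for teaching analytics.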

CMU OLI and Khan Academy are clearly out of the egg, but won’t be out of the nest until they make meaningful headway on these problems. But having said all that, it’s wonderful to see some innovation and forward progress in the OER world.

learning analytics open content visualizations

Learning Analytics: Time Series Visualization

As part of my work on the NGLC-funded Kaleidoscope Project I’ve been thinking about practical learning analytics. Why “practical”? My goal with practical learning analytics is to provide access to data in ways that an average teacher, with no special training, can leverage in order to help her students succeed. This is, of course, an extremely tall order.

As I began to mull over some common conventions that teachers could interpret without training (e.g., time flows left to right, scores move higher and lower) I realized that there’s already a tool available that provides visualizations like this – the Google Motion Chart Gadget.

This interactive visualization (play with it here) shows “visits” on the x-axis and “current score” on the y-axis for about 65 students over the first seven weeks of a course. (You can visualize up to 4 variables simultaneously with the GMCG, but I wanted this demo to be easy to understand.) Watching the data move through time makes it extremely simple to find students whose grades are dropping (as their circle drops), students whose course activity is waning (as their circle slows or stops in its left-to-right motion), etc. When you see something “interesting,” you just pause the viz and mouse over the circle in question to get the student’s ID so you can then reach out to motivate, offer help, etc.
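The chart wants one row per entity per time period – an identifier, a time value, and the numeric variables to plot. A sketch (in Python, with illustrative field names) of flattening weekly snapshots into that shape:

```python
def motion_chart_rows(weekly_snapshots):
    """Flatten {week: {student: (visits, score)}} into the
    row-per-student-per-week table a motion chart expects."""
    rows = [("student", "week", "visits", "current score")]
    for week in sorted(weekly_snapshots):
        for student, (visits, score) in sorted(weekly_snapshots[week].items()):
            rows.append((student, week, visits, score))
    return rows
```

From a table like this the gadget animates each student’s circle through the weeks; the hard part, as the next paragraph shows, is getting the weekly snapshots out of the LMS in the first place.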

Now, you might think that it’s quite simple to generate a viz like this since the GMCG provides the framework. However, try asking your LMS how many times a student logged in during week 3. Or what a student’s grade was at the end of week 5. Your LMS doesn’t actually keep this data at all (or anywhere that you can get to it). So if you want day-by-day or week-by-week granularity you’re forced to hack your way around by generating reports daily and calculating deltas to get the day’s activity. Even if your LMS provides a “1 day” report, you can only get 1 day – not all of the “1 day”s in the semester. So you’re still left downloading reports every day and combining them by hand (or script) into the form you need. And since you want to provide daily updates to this viz, that’s almost workable – except that it’s extremely tedious, since most LMSs provide no interface for pulling out the necessary data programmatically (i.e., you’ll be pulling these reports by hand). And if you want to provide this view to teachers in very nearly real-time (which should absolutely be our goal), you’re consigned to pushing this button by hand at the same time every day – like the fellow on Lost who had to push the button every 108 minutes in order to save the world.
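The delta-calculating step itself is trivial once you have consecutive cumulative reports in hand – which is exactly the point: the hard part is the by-hand downloading, not the math. A sketch, assuming each report maps student IDs to cumulative visit counts:

```python
def daily_deltas(report_today, report_yesterday):
    """LMS reports are cumulative, so today's activity is the difference
    between two consecutive reports (new students count from zero)."""
    deltas = {}
    for student, total in report_today.items():
        deltas[student] = total - report_yesterday.get(student, 0)
    return deltas
```

Run this against each pair of consecutive daily reports and you have exactly the per-day activity series the visualization needs.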

Anyway, lots more to unpack here, but I wanted to post this (1) because I think it’s a great approach that demonstrates how a simple visualization can be made to serve learning analytics purposes, (2) to send a shout out to my Kaleidoscope Project friends, and (3) to complain yet again about how difficult LMSs are to work with. Hopefully someone will find this work inspiring and build something even better and more intuitive.