RISE and Instructional Design

Matt Crosslin has posted a thoughtful response to our RISE article from last year. An open implementation of RISE was recently published in the Journal of Open Source Software. Since Matt took the time to engage so thoughtfully, I wanted to respond in kind. (Also, it’s a breath of fresh air to write a little about instructional design… it’s good to get back to your roots.)

[T]he bigger concern with the way grades are addressed in the RISE framework is that they are plotting assessment scores instead of individual item scores.

Actually, we’re using neither individual items nor the entire assessment score. We’re using testlets: small bundles of outcome-aligned items. On average, we’re looking at about four items per learning outcome. This gives us better construct validity than a single item while avoiding the many problems Matt correctly identified with using an entire assessment score.
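For readers who want to see the mechanics, here’s a minimal sketch of how a testlet score might be rolled up from item-level results. It’s illustrative only; the data shapes and names are mine, not the published implementation’s.

```python
from collections import defaultdict

def testlet_scores(item_results):
    """Average outcome-aligned item scores into one score per outcome.

    item_results: iterable of (outcome_id, item_score) pairs, with each
    item_score in [0, 1]. In our courses a testlet averages ~4 items.
    """
    by_outcome = defaultdict(list)
    for outcome_id, score in item_results:
        by_outcome[outcome_id].append(score)
    return {outcome: sum(scores) / len(scores)
            for outcome, scores in by_outcome.items()}

# One student's results: four items aligned to each of two outcomes.
results = [("LO-1", 1.0), ("LO-1", 0.0), ("LO-1", 1.0), ("LO-1", 1.0),
           ("LO-2", 0.0), ("LO-2", 1.0), ("LO-2", 0.0), ("LO-2", 0.0)]
print(testlet_scores(results))  # {'LO-1': 0.75, 'LO-2': 0.25}
```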

The biggest concern I have with the RISE framework really comes here: ‘The framework assumes that both OER content and assessment items have been explicitly aligned with learning outcomes, allowing designers or evaluators to connect OER to the specific assessments whose success they are designed to facilitate’…. To explicitly align assessment with [the] content is not just a matter of making sure the question tests exactly what is in the content, but to also point to exactly where the aligned content is for each question. Not just the OER itself, but the chapter and page number…. [I]f you could actually compare the grades on individual assessment items with the amount of time spent on the page or area that that specific item came from, you might be on to something.

One of the reasons we published this framework is to show people the power that comes from doing good (and hard) design work. Matt’s absolutely correct that RISE analysis is quite opinionated about the kind of course it can be used with. You really do need to have every individual item aligned to an outcome. Your algorithm for building assessments from an item pool needs to be outcome-aware to ensure sufficient coverage. Outcome alignment with content needs to be done at the individual page level. And so on. The courses we’re using RISE analysis with meet all these criteria. Hopefully, people will look at RISE and the continuous improvement work it enables and say, “I’m willing to put in the design work if that’s part of what I get in return.”
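To make “outcome-aware” a little more concrete, here’s a sketch of what an assessment builder with guaranteed coverage might look like. The pool structure and the four-items-per-outcome default are assumptions for illustration, not a description of any production system.

```python
import random

def build_assessment(item_pool, items_per_outcome=4, seed=None):
    """Draw a fixed number of items per learning outcome from a pool.

    item_pool: dict mapping outcome_id -> list of candidate item ids.
    Failing loudly when an outcome is under-supplied surfaces coverage
    gaps at build time instead of during later RISE analysis.
    """
    rng = random.Random(seed)
    assessment = []
    for outcome, items in item_pool.items():
        if len(items) < items_per_outcome:
            raise ValueError(
                f"Outcome {outcome} has only {len(items)} items; "
                f"{items_per_outcome} required.")
        assessment.extend(rng.sample(items, items_per_outcome))
    return assessment
```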

If you could group students into the four quadrants on each item, and then compare quadrant results on all items in the same assessment together, you could probably identify the questions that are most likely to have some kind of issue. Then, have the system send out a questionnaire about the test to each student – but have the questionnaire be custom-built depending on which quadrant the student was placed in. In other words, each learner gets questions about the same, say, 5 test questions that were identified as problematic, but the specific question they get about each question will be changed to match which quadrant they were placed in for that [question].

This is a super interesting idea. My thinking to date has revolved around engaging faculty in the continuous improvement process, and I’m hoping to blog about this later this week or early next. But I definitely want to explore the possibilities of this approach as well.
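As a thought experiment, the quadrant assignment Matt describes is easy to sketch: standardize each student’s use and grade measures for an item against everyone else’s, then bin by sign. Everything below (the names, the z-score cutoff at zero) is my hypothetical reading of his idea, not part of the published framework.

```python
from statistics import mean, stdev

def rise_quadrant(use, grade, all_use, all_grade):
    """Bin one student's (use, grade) pair for an item into a quadrant.

    use and grade are standardized against the class-wide distributions
    for the same item; the sign of each z-score picks the quadrant. A
    survey tool could then key its follow-up question for each flagged
    item off the returned label.
    """
    z_use = (use - mean(all_use)) / stdev(all_use)
    z_grade = (grade - mean(all_grade)) / stdev(all_grade)
    if z_use >= 0:
        return "high use / high grades" if z_grade >= 0 else "high use / low grades"
    return "low use / high grades" if z_grade >= 0 else "low use / low grades"
```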

My idea of a well-designed course involves self-determined learning, learner autonomy, and space for social interaction (for those that choose to do so). I would focus on competencies rather than outcomes, with learners being able to tailor the competencies to their own needs. All of that makes assessment alignment very difficult.

That doesn’t sound so different from my idea of a well-designed course. In the ID world I fear there’s a sense of (false) dichotomy between well-designed, well-aligned content and assessments on the one hand and spaces for self-determination, autonomy, and social interaction on the other. True, these have historically been two very different ways of thinking about course design. But why can’t well-designed and well-aligned content and assessment serve as a foundation that anticipates these other activities? Nothing says the quizzes associated with the core course materials have to account for the majority of students’ grades; other assessments that invite students to exercise more autonomy can be weighted more heavily. I believe there’s more room for bringing together a diversity of instructional design approaches than we’ve sometimes recognized in the past. I’m hoping to write more about this in the future, too.

Matt’s response makes it clear that I should also write more about the kind of instructional design that RISE assumes. (For example, in addition to all the alignment issues discussed above, it only works when all of a course’s content is openly licensed; otherwise, you can’t fix any of the problems you find.) I’ll try to work that into my upcoming post about engaging faculty in continuous improvement. That post is getting longer by the day…