Toward Renewable Assessments

For some time now I’ve been critical of “disposable assessments.” An assessment can be characterized as “disposable” if everyone understands that its ultimate destiny is the garbage can. Take an all-too-typical example:

  • Faculty member assigns student to write a two-page compare-and-contrast essay
  • Student writes the paper and submits it to faculty
  • Faculty grades the paper and returns it to student
  • Student checks what grade they received, briefly peruses any written comments, and then throws the paper away

(This example assumes physical paper, but the principles are exactly the same in the context of assessments submitted, graded, and returned electronically.)

A “renewable assessment” differs in that the student’s work won’t be discarded at the end of the process, but will instead add value to the world in some way. Take, for example, the Murder, Madness, and Mayhem assessments from 2008:

The University of British Columbia’s class SPAN312 (“Murder, Madness, and Mayhem: Latin American Literature in Translation”) contributed to Wikipedia during Spring 2008. Our collective goals were to bring a selection of articles on Latin American literature to featured article status (or as near as possible). By project’s end, we had contributed three featured articles and eight good articles. None of these articles was a good article at the outset; two did not even exist.

Rather than writing essays to submit to their instructor and then throw away, these students contributed good-quality research and writing to Wikipedia, where others will be able to benefit from their work for years to come. That’s the core idea behind renewable assessments like Murder, Madness, and Mayhem, or Project Management for Instructional Designers, or Blogs vs Wikis, or the DS106 Assignment Bank, or The Open Anthology of Earlier American Literature, and many of the other examples listed by the community here.

In many ways, I think the most powerful part of renewable assessments is the idea that everyone wants their work to matter. No one wants to struggle for hours or days on something they know will be thrown away almost as soon as it is finished. Given the opportunity, people want to contribute something, to give something back, to pay it forward, to make the world a better place, to make a difference. Few right-thinking people will invest their heart and soul in work that is academic in the way that non-faculty use the term – “not of practical relevance; of only theoretical interest. The debate has been largely academic.”

It’s no wonder people hate homework so much. They don’t hate learning – they hate wasting time and energy and effort. Try to imagine dedicating large swaths of your day to work you knew would never be seen, would never matter, and would literally end up in the garbage can. Maybe you don’t have to imagine – maybe some part of your work day is actually like that. If so, you may know the despair of looking forward and seeing only piles of work that don’t matter. And that’s how students frequently feel. Your results may vary, but I estimate that the 20 million postsecondary students in the US spend over 150M hours per year on disposable assessments. Every year. Year after year. When time is being used so poorly at such scale, I can’t believe it doesn’t negatively impact society.
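The back-of-envelope arithmetic behind that estimate can be made explicit. A minimal sketch, where the hours-per-student figure is an illustrative assumption chosen to show how little time per student is needed to reach the 150M total, not a measured value:

```python
# Back-of-envelope estimate of annual hours spent on disposable assessments.
# The per-student figure below is an assumed illustration, not measured data.
students = 20_000_000             # US postsecondary students (from the essay)
hours_per_student_per_year = 7.5  # assumed: well under 10 minutes per week

total_hours = students * hours_per_student_per_year
print(f"{total_hours / 1_000_000:.0f}M hours per year")  # prints "150M hours per year"
```

Even this deliberately conservative per-student assumption reproduces the essay’s 150M-hour figure; any realistic homework load pushes the total far higher.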

Replacing disposable assessments with renewable assessments goes a long way toward re-humanizing education, giving students a reason to care about and truly invest in their work. Without this broader motivating context, students are just waxing cars, sanding decks, and painting fences.


“You promise learn. I say, you do. No question. That your part.”

Research on Renewable Assessments

A change of this magnitude – and really, any change in assessment strategy – deserves to be well understood. So how do we conceptualize research about renewable assessments (and perhaps other forms of open pedagogy)? What kinds of questions are appropriate and useful to ask in this context?

My colleagues in the Open Education Group and I like to say that when you’re considering the outcomes of research on OER adoption, there are “two ways to win.” First, think about three possible outcomes of OER adoption in terms of change in cost and change in learning:

  • Students save money and learn less
  • Students save money and learn the same amount
  • Students save money and learn more

When OER is adopted in place of commercial resources, students save a substantial amount of money. But what happens to learning? Two of the three possible outcomes are “wins” for OER – the same amount of learning for less money is a win, and more learning for less money is a win. Hence our “two ways to win” mantra.

Are there parallels to this set of questions in the assessment context? I believe so. Instead of cost and learning, I think we should begin by examining the value students recognize in their work and the amount of learning these new assessments support. I realize this requires some additional explanation.

[We now interrupt this essay with a brief, unscheduled rant. The overwhelming majority of assessments used by faculty to assign grades to students and, in a very real sense, determine some of their future life prospects, are created by faculty with no training in psychometrics. These assessments are never evaluated in terms of the reliability and validity of their results. (The test item banks and other assessments that commercial publishers provide with textbooks are also almost never subjected to this level of rigor in their design.) To say that the current state of play among faculty is a widespread ignoring of issues of reliability and validity in assessment is to give faculty too much credit. I would wager that over 90% of faculty don’t know these are technical terms in the assessment context, that over 99% of faculty couldn’t properly define the terms in this context, and that over 99.99% of faculty couldn’t describe a reasonable process for establishing the reliability and validity of an assessment’s results. So the first person who objects to the idea of renewable assessments on the grounds that they “might not be as good” as the assessments they’ve traditionally used has some serious explaining to do.]

In the early days of OER adoption, we found that there are ways of adopting OER that actually cost more than using commercial materials. (See Wiley, Hilton, Ellington, and Hall (2012) for an example of how a poorly planned print-on-demand strategy can make OER more expensive than publisher textbooks.) In similar fashion, I think it’s reasonable to anticipate that in the early days of renewable assessment design we’ll see assessments that students find no more motivating than their disposable counterparts. Just as we spent time in the early years of OER adoption research specifically investigating the whether-or-nots and hows of cost savings, we’ll need to spend time in the early years of renewable assessment design specifically investigating the value students find in doing this work, how motivating or engaging they find it, etc. Just writing that sentence I can see there’s still some construct clarification to do here.

As we work to establish common patterns for designing renewable assessments that students find significantly more valuable to do than their disposable counterparts (just as we found OER adoption patterns that consistently save students money), we can also ask questions about how the assessments are functioning. At a minimum we can begin by asking questions about outcome alignment. For example, should a rubric for grading a renewable assessment differ from the rubric used to grade the disposable assessment it is replacing? If so, how? We’ll have to guard carefully against “construct irrelevance creep” in rubrics for renewable assessments. For example, it might be tempting to award points for a renewable assessment published on YouTube based on how many views or likes it gets. Unless the course context is marketing with social media, this is likely completely irrelevant to the learning outcomes we ought to be assessing in introductory sociology or biology. If I replace a two-page compare-and-contrast essay with a renewable assessment, should it not assess the same (or very highly overlapping) set of learning outcomes? Establishing some degree of comparability in what is assessed and the rigor with which it is assessed will be key to persuading faculty to abandon disposable assessments for renewable assessment strategies.

If you take the (inexplicably radical) position that assessments can be a productive part of learning and not just an autopsy of the learning process, we might also hypothesize better learning outcomes for students whose faculty use renewable assessment strategies. (Establishing the comparability described in the previous paragraph will also be helpful here.) Given a disposable assessment and a renewable assessment that both assess the same learning outcomes, might we hypothesize that students who find the renewable assessment work valuable, and consequently invest more time and effort in it, will display higher levels of mastery on the outcomes we care about? While only a hypothesis, it appears reasonable on its surface. And I have a few years of anecdotal evidence that give me confidence that it’s a hypothesis worth testing.

Looking for a topic for that dissertation or for your next journal article? You might think about attacking questions like:

  • Do students assigned renewable assessments find them more valuable, interesting, motivating, or rewarding than traditional assessments? Why or why not?
  • Do students assigned renewable assessments demonstrate greater mastery of learning outcomes than students assigned traditional assessments? Why or why not?

And what do you suppose will be the result of research into these and similar questions? Again, going back to the OEG mantra, there are two ways to win:

  • Assessments that students find significantly more rewarding to do but that result in lower levels of mastery,
  • Assessments that students find significantly more rewarding to do and that result in the same level of mastery, and
  • Assessments that students find significantly more rewarding to do and that result in higher levels of mastery.

Renewable Assessments and Open

Open licenses allow faculty and students to revise and remix materials (both content and assessments) in a broad range of ways. As you look through the examples of renewable assessments above, you will see that many of them involve revising and remixing – demonstrating that renewable assessments are enabled by the 5R permissions granted by open licenses. (It’s true that a student could do a renewable assessment completely “from scratch,” but that doesn’t appear to be the way they’ve worked to date.) In other words, “open” makes possible renewable assessments that would otherwise be illegal. This is why I think renewable assessments are the best examples of open pedagogy we have now. You might argue that a student could use a range of copyrighted materials in a homework assignment and claim it was a Fair Use. However, I suspect many people would hesitate to share this kind of material broadly given the ambiguities of Fair Use, which kind of undermines the “give something back” philosophy underlying renewable assessments. And without providing the permissions for others to revise, remix, build on, and improve the work, it’s difficult to really call it “renewable.”

Students are the authors and, thanks to the Berne Convention, the copyright holders of the homework and other artifacts they create as part of their education. There is no morally or ethically appropriate scenario in which faculty require students to openly license their homework or other creations as part of an assignment. However, faculty can extol the benefits of openness and advocate for students to license their works under a Creative Commons license. This advocacy will be significantly more effective (and less hypocritical) if the faculty member is using OER in the class and can point to OER they have created and shared.

If some portion of the over 150M hours higher ed students currently spend on disposable assessments can be spent on renewable assessments instead, and if some portion of those students choose to openly license their work, questions about the sustainability and maintainability of the OER ecosystem can be answered. Over time, we could see a transition to a place where the majority of content and assessments a learner encounters were created “by students, for students” with editorial support from faculty (what we used to call “grading”). What an incredible, inspiring, sustainable world that would be…

Let’s create and share more renewable assessments as OER – open renewable assessments – that others can adopt, improve, and share broadly. And let’s get this research going.



  • Yes! I was just talking to someone recently about whether or not there was such research already, and I wasn’t finding much. I was trying to make a case for why faculty should incorporate renewable assignments, and I wanted some data to show that they are pedagogically effective. But at the time I couldn’t find any. It is definitely needed, and this is on my agenda for future research. Could be a year or two, but I hope to get some done as soon as I can!

  • I am working on a research project right now where we might be able to investigate something along these lines. I am reading your material as background information for questions I am bringing up to my team.

    In general I think it is about providing a solid toolset for students to use and then to create standards in the industry for “renewable assignments” production and delivery. So many people are building so many products in this realm currently, but having a central place for work will be the deciding factor for widespread adoption and success.

    This niche market needs to become focused and cohesive.

    I immediately think of the Adobe, 3D animation, and game programming ecosystems–even email really. It has gotten to the point where individual providers matter less, as the toolsets and formats are more ubiquitous and normalized, which allows for greater communication between the diverse range of products and users.

  • skydaddy

    I’ve been an advocate of authentic assessments for a long time. This pay-it-forward approach takes things to a new level. I can see a potential for pushback in two areas.

    One, concerns over academic integrity. “You want me to TELL them to use other people’s work, and let other people use theirs???” (Obviously, citing sources correctly deals with this, but that’s a predictable initial reaction.)

    Two, complaints like, “I don’t have the time. Using the publisher’s test banks is so much easier.” (When I point out to them that the test banks are available online for a small sum, they sigh and have the students take the tests in the Test Center with a lockdown browser…)

    But frankly, those folks aren’t generally receptive to innovation anyway.