Renewable Assessments: Openness, Stigmergy, and Continuous Co-Creation

Having students grade each other’s work is a time-honored tradition among faculty looking to save themselves some time and headache. In addition to appreciating the time savings, many faculty argue that participation in the peer assessment process can actually promote deeper student learning. This is absolutely true when faculty take the time necessary to design the peer assessment experience and supporting artifacts (like rubrics) well. (Though you may not experience a net savings in time after you do all this preparatory work!)

I’ve been thinking recently about peer assessment in the context of renewable assessments. Recall that renewable assessments ask students to produce artifacts of value in the real world (as opposed to disposable assignments, which both faculty and students understand will be thrown away after grading). When one student is providing feedback, encouragement, criticism, and suggestions for improvement on an artifact another student is in the process of making, at what point does that stop being peer assessment and become co-creation? Honestly, I think at this point it’s already crossed over.

We frequently hear people characterize students as (ideally) being co-creators of knowledge rather than mostly empty receptacles into which we pour the dates of wars and the names and weights of chemical elements. I’m a firm believer in that vision, and having taught in higher ed for twenty years now, I recognize how difficult it can be to pull off in practice. But I think there’s something to this idea of combining renewable assessments with peer assessments… it feels like we should lean into it.

When the renewable assessment process begins with existing open artifacts created by others that can be revised, remixed, and built upon, and we then layer peer assessment on top of that, the result is a special kind of co-creation for continuous improvement – a stigmergy that feeds back on itself in ways that seem like they ought to result in amazing student learning (though whether or not they do is an empirical question we should answer).

Perhaps we should call this activity continuous co-creation?

Let’s dig a little further into the idea of stigmergy – and particularly how it relates to openness. Wikipedia summarizes:

Stigmergy is a mechanism of indirect coordination, through the environment, between agents or actions. The principle is that the trace left in the environment by an action stimulates the performance of a next action, by the same or a different agent. In that way, subsequent actions tend to reinforce and build on each other, leading to the spontaneous emergence of coherent, apparently systematic activity.

Stigmergy is a form of self-organization. It produces complex, seemingly intelligent structures, without need for any planning, control, or even direct communication between the agents. As such it supports efficient collaboration between extremely simple agents, who lack any memory, intelligence or even individual awareness of each other.

Or, as Sun, Marsh, and Onof summarize, “In its most generic formulation, stigmergy is the phenomenon of indirect communication mediated by modifications of the environment.”

A few points. First, these agents communicate indirectly by modifying their environment. Stop and think for a moment – a large part of students’ learning environments is the media (readings, videos, etc.) that they read and watch. (And content makes up an even larger part of the learning environment for online learners, who lack the in-class aspects of the learning environment.) Inasmuch as content is the environment, how can we expect learners to modify their environment when it is copyrighted and locked down under digital rights management technology? They can’t. Only open content is modifiable in this way.

Second, like many self-organizing systems, stigmergy benefits from a larger number of agents participating in the system. For example, go spend a few happy moments playing with this simulation of ants discovering food and bringing it back to the nest. Click setup, then click go and watch them work. They communicate indirectly by modifying their environment – in this case, ants who find food leave an evaporating trail of pheromones behind as they return to the nest. Other ants who cross this trail in their random wandering turn upstream and follow the pheromone trail.

Click go again to stop the simulation and turn the number of ants down to 50. Click setup, then go again. What do you notice? It takes much longer for the ants to find and harvest the food. Now increase the number of ants to 200 and run the simulation again. What do you notice? You can try the same manipulations (increasing and decreasing the number of agents in the system) with this simulation of birds flocking or any of the other awesome NetLogo sims. The lesson is this – more actors result in more interesting and useful behavior emerging more quickly.
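
If you want to see the mechanism laid bare, the trail-following behavior described above can be sketched in a few dozen lines of code. This is a minimal, hypothetical illustration, not the NetLogo model itself: the grid size, pheromone amounts, and evaporation rate are all made-up parameters. The point it demonstrates is the one from the definition: agents never communicate directly; food-carrying agents modify the shared environment (deposit a trace), the trace decays over time, and other agents bias their wandering toward stronger traces.

```python
import random

# Hypothetical minimal sketch of stigmergic foraging, loosely inspired by
# the NetLogo ants model (this is NOT that model; all names and parameter
# values here are illustrative assumptions).

SIZE = 20            # the world is a SIZE x SIZE grid
NEST = (0, 0)        # ants start here and carry food back here
FOOD = (15, 15)      # a single food source
EVAPORATION = 0.95   # fraction of pheromone retained each tick

def run(n_ants, ticks=2000, seed=42):
    """Run the simulation; return how many food units reached the nest."""
    rng = random.Random(seed)
    pheromone = [[0.0] * SIZE for _ in range(SIZE)]
    ants = [{"pos": NEST, "carrying": False} for _ in range(n_ants)]
    delivered = 0

    def neighbors(x, y):
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx or dy) and 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

    def step_toward(v, target):
        # Move one cell toward the target coordinate.
        return v + (target > v) - (target < v)

    for _ in range(ticks):
        for ant in ants:
            x, y = ant["pos"]
            if ant["carrying"]:
                # Modify the environment: leave a trace, then head home.
                pheromone[x][y] += 1.0
                ant["pos"] = (step_toward(x, NEST[0]), step_toward(y, NEST[1]))
                if ant["pos"] == NEST:
                    ant["carrying"] = False
                    delivered += 1
            else:
                # Wander, preferring neighboring cells with stronger traces.
                opts = neighbors(x, y)
                best = max(opts, key=lambda p: pheromone[p[0]][p[1]])
                if pheromone[best[0]][best[1]] > 0 and rng.random() < 0.8:
                    ant["pos"] = best
                else:
                    ant["pos"] = rng.choice(opts)
                if ant["pos"] == FOOD:
                    ant["carrying"] = True
        # Traces decay, so trails that stop being reinforced disappear.
        for row in pheromone:
            for col in range(SIZE):
                row[col] *= EVAPORATION
    return delivered
```

Comparing the delivery counts returned by `run(50)` and `run(200)` is the code analogue of the slider experiment above: with more agents, trails tend to get reinforced before they evaporate, so coherent collective behavior emerges more quickly.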

Because OER are freely available, they maximize the number of agents who can be influenced by them and the traces left in them by other students.

In summary, commercially published resources both restrict the number of agents who can be influenced by them (by being too expensive) and disallow students from making modifications to them (by being tightly copyrighted and severely DRMed). In other words, it is impossible for stigmergy to emerge in the context of commercially published resources.

Open educational resources, on the other hand, maximize participation by being free and permit the modifications that allow interesting things to emerge from learners’ interactions with them. In other words, OER make the emergence of stigmergy possible – given the right supporting technology.

And so we’ve connected openness and stigmergy, but what does any of this have to do with renewable assessments? When students are revising and remixing OER (modifying their environment) and those traces are left behind (openly published) for other students to find and build on – that, my friends, is stigmergy. Add peer assessment, aka continuous co-creation, to the mix, and we’ve shortened the wait time between interactions, making the whole system evolve more quickly.

This has a range of implications both for student learning and the sustainability of the broader OER ecosystem.

Comments on this entry are closed.

  • Another great contribution from one of my favorite voices in the field of education. Excellent series of important points, David. The lines between instruction and assessment, between peer review and co-creation, and between authentic assignments and artifacts that can be shared are blurred at best. Thanks for pointing this out, and for teaching me about stigmergy.

  • Brandon Dorman

    I teach mainly education technology courses in higher education, so it’s pretty easy to do authentic assessment/renewable assessments. I even tell my students (who are mainly K-12 teachers) when they first walk in: “If you’re creating something new for this class, you’re probably doing it wrong!” Thanks to this blog and others I’ve been reading a lot over the past year (#openped too), I’m completely redesigning my “Rich Digital Instruction” course. Love the ant simulation analogy.

  • siouxgeonz

    So is this something like figuring out that you should put the sidewalk where the people cut across the grass? Or, maybe you don’t even need a sidewalk?

  • Pingback: Don Gorges Archive of LinkedIn Posts & Links July 27 to September 5 | Don Gorges

  • Pingback: University digital technology: problems, causes, and suggested solutions – The Weblog of (a) David Jones()

  • Hello,

    I am a Digital Media PhD student working at UW Madison under Kurt Squire. If you have good leads for seeing such implemented systems, I’d love to check them out. Even hints of leads, related writings, or relevant frameworks.

    I am very curious about the digital design and interactions of student learning – more specifically, knowledge development and transmission (e.g., a quest-based learning class where quests are semi-peer-reviewed and generated – personalized learning).