What Memes Can Teach Us About Applying Educational Research in Practice

Poorly made stormtrooper cupcakes (https://cheezburger.com/8016802816)

Do you know the #nailedit meme? In its most common form:

  1. Someone sees a recipe or craft online.
  2. They try to recreate it.
  3. Things go terribly, comically wrong.
  4. They graciously post the results online, allowing us all to take joy in the degree to which they absolutely #nailedit.

Part of what makes these memes great is that they’re so relatable. Everyone has been there – faithfully (we believe) following a recipe or other set of instructions (looking at you, Ikea), only to have things go horribly wrong. It really can be difficult to get the desired results even when you’re following a step-by-step recipe with illustrations.

But what does that have to do with improving learning?

It’s becoming more and more common to hear people use phrases like “we apply learning science” or “grounded in educational research” when they describe the design of learning tools, activities, assessments, and other content they create. Unfortunately, sometimes these applications of educational research – like recreations of Pinterest recipes – are worthy of the #nailedit hashtag. There are at least four reasons why this happens.

First, it’s not an exaggeration to say that a lot of academic writing is intentionally impenetrable. In educational research in particular, it feels like the more successfully an author obfuscates plain meaning behind technical jargon, the more likely they are to have their article accepted for publication. I was reminded of this just this past week, when a friend reached out asking for advice. They had worked very hard to write an article that was clear and easy to understand, only to receive feedback from the editor of the journal to which they had submitted it that it “didn’t sound academic enough.” There was no substantive critical feedback about the topic, the method, the review of literature, etc. The article was just too plainly written.

This culture of “make it sound more impressive” isn’t doing anyone any favors. In addition to harming those of us who care about using evidence-based practices in instructional design and teaching, it actively harms the journals that encourage this kind of writing: articles that are harder to read are less likely to be read and cited, decreasing those same journals’ impact factors.

A second reason that applying learning science can be harder than it seems like it should be is that, generally speaking, articles reporting educational research are nothing like clearly written recipes with step-by-step illustrations. In fact, they often completely fail to provide any concrete guidance about how to apply their findings in the actual design and creation of learning materials and learning experiences. The thought of trying to use these articles to design instruction reminds me of the “how to draw an owl” meme – there’s just not enough information provided to be immediately useful.

How to draw an owl. Step 1: Draw some circles. Step 2: Draw the rest of the owl.

Third, as the #nailedit memes so hilariously demonstrate, even when a recipe is broken down into simple, clear instructions, our attempts at following those instructions can fail if we lack the necessary skills. It’s pretty clear what “frost the eyes and other elements of the stormtrooper helmets as shown below” means. But a certain amount of skill is necessary to actually do it – and no amount of clarity in the instructions can make up for a lack of skill on the decorator’s part. Similarly, we can give faculty the very clear advice that they should “build relationships of support, care, and trust” with students in order to improve outcomes for those most at risk. But there’s a lot more detail that goes into drawing that owl, and a lot of social-emotional skill is necessary for the instructor to do it successfully.

Finally, sometimes recipes simply don’t work even when we follow them faithfully and with skill, because they need to be adapted to the reality of local circumstances. Perhaps some “exotic” ingredients aren’t available where you live, and you need to find acceptable substitutes. Or maybe you live at high altitude, which requires changes in baking temperature, duration, and even ingredients. The same is true for all of us involved in education – local contexts require local adaptations as we apply evidence-based design, teaching, and assessment practices with our students. Some experimentation will always be necessary to get the recipe to come out right in your environment.

This is why I’m so committed to integrating continuous improvement into the instructional design process. Even when we set out to create learning tools, activities, and assessments that are grounded in rigorous research, some experimentation will be necessary to bring things together successfully in our specific contexts. As I’ve written about in some depth before, all instructional designs are hypotheses, regardless of how firmly they are “grounded in learning science.” And because initial hypotheses are seldom correct, the learning designs we create should be subjected to rigorous testing, updating, and retesting until they are proven capable of accomplishing their design goals.

We should never say “I followed the process correctly, therefore I don’t need to look at the result.” As a specific example, it isn’t sufficient to say “I followed an equity-centered design process, so I don’t need to confirm whether students actually achieved equitable outcomes. I followed the process! I’m sure it worked out fine!” This mindset reminds me of the “don’t celebrate too early” memes.

At the end of the day, we should care far less about whether a course design is based on Behaviorism or Connectivism, uses the Four Component Instructional Design model or a Problem-based Learning model, or is built on a blog or a blockchain. Instead, we should care much, much more about how well the course supports student learning. Fetishizing the process while ignoring the outcome doesn’t help students. Yes, of course we should begin by basing our designs on rigorous research. But that wonderful point at which you finally finish creating a learning tool / course / textbook / etc. is also the point at which the really hard work should begin – making sure that it actually supports student learning.


And if you’re not going to take the time to go through the improvement cycles necessary to make sure it’s more effective than what was already available, why did you spend all that time designing and developing in the first place?