TULIP: the Theoretical Upper Limit of Impact of Products

Today and tomorrow I’m at the EdTech Efficacy Research Academic Symposium in Washington, DC. The conversations here have been wonderful and have reminded me of something…

For many years, several friends and I have argued about the following question:

After accounting for all other differences – differences in a student’s age, race, gender, income, and prior academic success; differences in school environments; differences in teachers; differences in support available from friends, family, and other out-of-school sources; &c. – what is the theoretical upper limit on the impact a specific textbook, digital learning platform, or other edtech product can have on educational measures we care about (e.g., final grade, completion rate, time to graduation, satisfaction, etc.)?

If we don’t have a notion of the maximum potential impact these kinds of tools can have on measures we care about, how can we judge their effectiveness? For example, if the upper bound is +0.43 letter grades, then we would interpret a product achieving a lift of +0.2 letter grades in one way. But if the upper bound is actually +1.7 letter grades, we would interpret that same lift of +0.2 letter grades in an entirely different way.

While it’s interesting – and even useful – to compare the measures (like final grade) associated with different products, it feels like this work is ungrounded in a way that unsettles me.

I have some thoughts on the topic, but right now am just putting this out there and wondering what other people think…

3 thoughts on “TULIP: the Theoretical Upper Limit of Impact of Products”

  1. I don’t believe your question is answerable as framed. It’s what learners do that causes learning. Any product offers choices about uses, intentionally or not. The more empowering the product, the more meaningful the choices. For example, do exercise videos directly strengthen viewers?
    Many folks have written about this. This is my contribution:

    Ehrmann, Stephen C. (1995). “Asking the Right Questions: What Does Research Tell Us About
    Technology and Higher Learning?” Change: The Magazine of Higher Learning, 27(2)
    (March/April), pp. 20–27.

    • You’re asking about effectiveness, but I’m asking about efficacy. (See https://opencontent.org/archives/3991.) Effectiveness (which is what we care about most) will always be lower than efficacy for reasons you hint at above, but I think we should strive to understand efficacy anyway, since it frames and contextualizes our effectiveness findings.

      • I do have a sense of what you mean. Going back to my example of an exercise video, you’re only interested in the case of a person of some given fitness (or lack of fitness) who follows the instructions but goes no further than the instructions? What if the exercise increases their motivation and they start jogging, too? Is jogging part of the envelope of efficacy?

        Most learning situations have far more elements than user and product. Suppose there’s a composition course with faculty, lesson plans, handouts, and computers (as well as other factors, present and past, that also influence what students do). How do you use the concept of efficacy to analyze that situation? I’m not familiar enough with “efficacy” to do so.

Comments are closed.