As I’ve been (re-)reading OER adoption research through a more critical lens, I’m seeing a recurring pattern of significant threats to validity in the designs of studies purporting to measure the impact of OER adoption on student outcomes. While there are numerous methodological issues to consider, in this essay I’ll discuss three. Specifically, I’ll share:
- three questions you should ask when you read research about the impact of OER adoption on student outcomes,
- the reasons why you should care about the answers to those questions, and
- the questions the study is really addressing when the answer to any of the three questions is “no.”
Three questions to ask when you read research on the impact of OER adoption
Ask yourself these three questions when you read OER adoption research – especially research that claims to find a positive impact on student outcomes.
- Did the study control for differences in instructors?
- Did the study control for differences in instructor support?
- Did the study control for differences in the instructional design of the learning materials?
Why should you care about the answers to these questions? Because when the answer to any of them is “no,” the research you’re reading doesn’t actually tell you anything about the impact of OER on student outcomes – it tells you something completely different.
Why you should care about the answers to the three questions
Controlling for differences in instructors
Some instructors are better teachers (more effective teachers) than others. This might be because they employ more evidence-based teaching practices than other instructors. It might be because they demonstrate more care, support, and belief in their students’ ability to succeed. It might be because they’re more understanding and flexible when life happens to their students. And there are other important differences that, while they don’t make someone a better instructor, can threaten the validity of studies if differences in instructors aren’t accounted for. For example, some are “easier graders” than others, &c.
Many research studies fail to address how instructors ended up using OER in the first place. While there is occasionally another explanation (e.g., adjuncts being required to use OER in the courses they teach), the most common scenario is that each instructor is exercising their academic freedom to choose their own course materials. When instructors aren’t randomly assigned to treatment and control conditions, or when, at a minimum, some kind of matching (like propensity score matching) isn’t done, it is literally impossible to separate the influence of the instructor on student learning from the influence of the OER or TCM (traditionally copyrighted materials).
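To make the matching idea concrete, here is a minimal sketch of what propensity score matching at the instructor level could look like. Everything in it is invented for illustration (the data, the variable names like years_teaching, uses_active_learning, and adopted_oer, and the choice of covariates); it is not drawn from any of the studies I’m describing.

```python
# A hypothetical sketch of propensity score matching at the instructor level.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 200

# Invented instructor-level data: who adopted OER, plus a couple of
# observable characteristics we can match on.
instructors = pd.DataFrame({
    "years_teaching": rng.integers(1, 30, n),
    "uses_active_learning": rng.integers(0, 2, n),
    "adopted_oer": rng.integers(0, 2, n),
})

covariates = instructors[["years_teaching", "uses_active_learning"]]
treated = instructors["adopted_oer"] == 1

# 1. Estimate each instructor's propensity to adopt OER from the covariates.
model = LogisticRegression().fit(covariates, instructors["adopted_oer"])
propensity = model.predict_proba(covariates)[:, 1]

# 2. Match each adopter to the non-adopter with the closest propensity score.
adopter_scores = propensity[treated.to_numpy()].reshape(-1, 1)
nonadopter_scores = propensity[~treated.to_numpy()].reshape(-1, 1)
matcher = NearestNeighbors(n_neighbors=1).fit(nonadopter_scores)
_, idx = matcher.kneighbors(adopter_scores)
matched_controls = instructors[~treated].iloc[idx.ravel()]

# 3. Student outcomes would then be compared across the matched sample
#    (adopters plus their matched non-adopters) rather than across everyone.
matched_sample = pd.concat([instructors[treated], matched_controls])
print(matched_sample["adopted_oer"].value_counts())
```

Matching on observables like these is, of course, still much weaker than random assignment; it only accounts for the instructor characteristics a researcher happens to measure.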
It’s critical to control for differences in instructors because the effect of the instructor on student learning can be significantly larger than the effect of the textbook they choose. If the researchers didn’t control for these differences, differences in student learning attributable to the impact of the instructors can drown out differences attributable to the impact of the course materials being open.
Controlling for differences in instructor support
OER research is often conducted in conjunction with department, college, institution, system-wide, or statewide OER initiatives. Instructors who participate in these initiatives often receive professional development, support from a librarian, support from an instructional designer, a course release, and/or a stipend. Instructors who don’t participate in the OER initiative don’t receive this support.
It shouldn’t surprise anyone when instructors who receive additional support teach more effectively than instructors who don’t receive that support. Consequently, when a study associated with an OER initiative fails to describe the support and incentives the initiative provided to faculty who used OER, we can’t tell whether any observed differences are attributable to the OER or to the additional support the instructor received. If the researchers didn’t control for these differences, differences in student learning attributable to the impact of the additional instructor support can drown out differences attributable to the impact of the course materials being open.
Controlling for differences in instructional design
Some learning materials are designed in ways that support student learning more effectively than others. Many learning materials are created by teams of people with deep expertise in the discipline, pedagogy, instructional design, the learning sciences, &c. Many OER are created by a single instructor with a graduate degree in their discipline and no training whatsoever in the design of effective instruction.
Most research studies on the effects of OER tell us literally nothing about the OER or the TCM used by the control group beyond what can be inferred about their licensing. What if one group of students is using a printed textbook while another is using an interactive platform that provides unlimited practice with immediate, diagnostic feedback? What if one instructor has chosen materials that come with PowerPoint slides for use in lecturing, and another instructor has chosen materials that come with explicit supports for doing active learning during class time?
Or, to say it differently, imagine a drug trial where the research report describes a treatment group whose members took a pill, and a control group whose members took a different pill, but there’s no description of the active ingredients in either pill! That’s essentially what’s happening when there’s no description of the instructional design of the OER or the TCM.
Instructional design differences will drive larger changes in student outcomes than the open or proprietary licensing of course materials. If the researchers didn’t control for these differences, differences in student learning attributable to the impact of the pedagogical design can drown out differences attributable to the impact of the course materials being open.
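To illustrate how much this matters, here is a small, hypothetical simulation (mine, not from any study) in which openly licensed materials have no effect at all on student grades, but OER adopters are more likely to have received extra support and to have chosen interactive materials. All variable names and effect sizes are made up. A naive comparison credits OER with the benefit of the support and the design; a model that also includes those differences as covariates does not. Instructor differences could be added in exactly the same way.

```python
# A toy simulation of the confounding described above; every number here is invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000

# In this toy data, OER adopters are more likely to have received extra
# support and to have chosen interactive materials.
received_support = rng.integers(0, 2, n)
used_oer = (rng.random(n) < 0.2 + 0.6 * received_support).astype(int)
interactive_materials = (rng.random(n) < 0.2 + 0.5 * used_oer).astype(int)

# The simulated outcome depends on support and instructional design,
# but not on whether the materials were openly licensed.
final_grade = (70 + 4 * received_support + 5 * interactive_materials
               + rng.normal(0, 5, n))

students = pd.DataFrame({
    "final_grade": final_grade,
    "used_oer": used_oer,
    "received_support": received_support,
    "interactive_materials": interactive_materials,
})

# Naive model: attributes the whole difference between groups to OER adoption.
naive = smf.ols("final_grade ~ used_oer", data=students).fit()

# Adjusted model: support and design enter as covariates, so the used_oer
# coefficient no longer absorbs their effects.
adjusted = smf.ols(
    "final_grade ~ used_oer + received_support + interactive_materials",
    data=students,
).fit()

print("naive OER effect:   ", round(naive.params["used_oer"], 2))
print("adjusted OER effect:", round(adjusted.params["used_oer"], 2))
```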
So what questions does most of the research on the impact of OER adoption actually answer?
When a study fails to control for differences in instructors, what that study is really measuring is, “How do the outcomes of students who take courses from the kind of instructors who choose to adopt innovations like OER compare to the outcomes of students who take courses from the kind of instructors who choose not to?”
When a study fails to control for differences in instructor support, what that study is really measuring is, “How do the students of well-supported instructors perform compared to the students of instructors who are not as well supported?”
When a study fails to control for differences in instructional design between the OER and the TCM, the study is really asking, “How do students taught using pedagogical approach X perform compared to students taught using pedagogical approach Y?”