I’ve written about Pearson’s efficacy work in the past, and Ray Henderson’s Twitter post this morning has prompted me to ponder and write a bit more.
— Ray Henderson (@readmeray) October 14, 2015
Let me start by applauding Pearson for following through on their commitment to focus more on efficacy. As page 3 of the report states, this platform is “the first product at Pearson to have an efficacy framework built in from the very beginning,” and the report seems to have been enabled by this integration. This is a tremendous first step. However, I think there are a few very simple things Pearson could do to greatly improve future efficacy work. Inasmuch as Pearson have invited us to Advocate for Efficacy, that is what I want to do for the remainder of this post.
Efficacy vs Effectiveness
For some reason, Pearson have found it necessary to create new definitions of the terms efficacy and effectiveness. This is extraordinarily unfortunate because there are already perfectly good definitions established in the broader research community. As I wrote last month:
Efficacy refers to whether a drug demonstrates a health benefit over a placebo or other intervention when tested in an ideal situation, such as a tightly controlled clinical trial. Effectiveness describes how the drug works in a real-world situation. Effectiveness is often lower than efficacy because of interactions with other medications or health conditions of the patient, sufficient dose or duration of use not prescribed by the physician or followed by the patient, or use for an off-label condition that had not been tested. (How FDA Approves Drugs and Regulates Their Safety and Effectiveness, Congressional Research Service, p. 4. h/t wikipedia)
Our typical conversations about the efficacy of educational materials completely miss this critical distinction. Why does the distinction matter? Just as there are many sick people experiencing an “insufficient dose or duration of use” because they can’t afford their medicine, there are many students who experience an “insufficient dose or duration of use” of educational materials because they can’t afford them. When students who can’t afford their textbooks have to borrow them from friends or check them out from the library, they’re likely receiving an insufficient dose or duration of use. When students without friends in class or time to get to the library try to get by without using textbooks at all, they’re receiving no dose whatsoever.
Unfortunately, but perhaps not surprisingly, Pearson’s redefinitions of these terms omit this critically important distinction altogether (p. 5):
What Pearson Means by Efficacy and Effectiveness
- Efficacy describes whether a product or intervention has a positive effect on learning, such as reducing wrong answers, increasing retention rates, or raising final exam scores.
- Effectiveness measures the size of the educational improvement from a product or educational intervention.
I don’t know much about the REVEL product. However, a quick search on Amazon reveals it to be not a textbook replacement, but a textbook supplement. Despite language in the report about how affordable the REVEL product is, from a quick scan of prices on Amazon the REVEL product apparently adds $55 – $80 to the cost of Pearson’s existing textbooks. For example, Amazon lists the electronic Pearson Psychology textbook as costing $135 to buy, and the aligned REVEL supplement as costing $71, meaning REVEL increases the cost of Introductory Psychology from $135 to $206. Now, to be fair, according to the case study the average test scores in the class using the product improved by 14%. That sounds terrific, but is it worth the increased cost? From an (FDA) effectiveness perspective, how many students will be financially capable of benefiting from this product?
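To make the cost question concrete, here is a back-of-envelope sketch using only the figures quoted above. The "cost per percentage point of improvement" framing is just one illustrative way to think about academic return on investment, not a metric from Pearson's report:

```python
# Figures quoted in the post (Amazon prices and the case study's reported gain).
textbook_price = 135.00   # electronic Pearson Psychology textbook
revel_price = 71.00       # aligned REVEL supplement
score_improvement = 14.0  # average test-score improvement reported (%)

total_cost = textbook_price + revel_price
added_cost_per_point = revel_price / score_improvement

print(f"Total cost with REVEL: ${total_cost:.2f}")
print(f"Added cost per point of improvement: ${added_cost_per_point:.2f}")
```

Roughly five dollars per percentage point of improvement may or may not be a good deal; the point is that students who cannot afford the supplement at all see none of the benefit.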
Statistics, Rigor, and Credibility
The questions asked and answered in the case studies are quite reasonable. However, the groups studied are quite small (ns ranging from the 30s to the 90s) and, unless I missed it, no statistical test more sophisticated than a t-test is ever conducted. Despite claims in the report that the research team includes “PhD-level statisticians” (p. 5), the report is sorely lacking in sophistication and rigor. I don’t believe this paper would be approved as a dissertation in a graduate program on educational research. It certainly would not be accepted in a Tier 1 education journal.
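One small illustration of what a more rigorous treatment would add: with groups this small, a bare t-test tells you only whether a difference likely exists, not how big it is. Reporting a standardized effect size alongside it costs almost nothing. The scores below are synthetic placeholders, not Pearson's data:

```python
import math
import statistics

# Hypothetical exam scores for two small groups (n = 10 each).
control = [68, 72, 75, 70, 66, 74, 71, 69, 73, 67]
treatment = [74, 78, 80, 73, 76, 82, 75, 77, 79, 72]

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (statistics.mean(b) - statistics.mean(a)) / pooled_sd

d = cohens_d(control, treatment)
print(f"Cohen's d = {d:.2f}")  # magnitude of the effect, not just its existence
```

Effect sizes, confidence intervals, and controls for prior achievement are the kinds of details a Tier 1 journal would demand and a whitepaper can omit.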
It would not surprise me to learn that this report had been watered down significantly in order to be understandable by “normal people.” While that may be great for marketing, it does little to build credibility with people who actually understand what rigorous educational research looks like. Consequently, in addition to continuing to publish these public-facing whitepapers, Pearson needs to be publishing the more detailed / sophisticated versions of their studies in respected educational research journals. (And they need to purchase the open access option for these articles so we can all read them.) They currently partner with faculty to write these whitepapers – why not co-author peer-reviewed articles with these same faculty? That would simultaneously be more useful to faculty (in their tenure and promotion processes) and provide greater credibility for Pearson’s claims.
I’ll address this at greater length in a future post, but perhaps the best thing Pearson does in this report is to include a discussion of implementation strategies. The ways in which educational materials are used greatly influence their impact on student outcomes. This is critically important to understand. I believe we’re seeing this same effect across OER adoptions, and have labeled it the Remix Hypothesis.
Efficacy and Comparisons
It’s terrific that Pearson is being more aggressive about studying and communicating the efficacy of their products. However, there are still some fundamental problems with their framing and definition of terms, as well as a lack of sophistication in the work they’re making publicly available. Hopefully these will all improve over time and Pearson will give us better information about the usefulness of their products.
You may be wondering – why would David go out of his way to write a post that praises Pearson in some ways (even while criticizing it in others)? Part of the answer is that professional ethics demand that I give credit where credit is due – they’re heading in the right direction with this work. Part of the answer is that the faculty member in me can’t pass up the chance to offer suggestions for improving what appears to be truly important work. But another reason is that we need good, solid data about the efficacy, cost, and effectiveness of Pearson’s and other commercial publishers’ products. The availability of these data enables truly objective, head-to-head comparisons of OER and commercial products, and sets the stage for conversations about students’ academic return on investment in course materials. That’s a fight I know OER can win.
As you’ll recall from last year’s Babson Survey Research Group findings, faculty rated “proven efficacy” more important – by a large margin – than anything else when deciding what materials to assign to students. Proven efficacy is significantly more important to faculty than “trusted quality,” “ease of use,” “wide adoption,” and other historic proxies for efficacy:
Because efficacy is the single metric faculty care most about, it is the single most important metric for OER advocates to speak about. (As per above, I would say “effectiveness,” but you get the point). Consequently, the more data that Pearson and other publishers provide on efficacy, the more concrete the foundation is from which OER advocates can argue – as long as we continue to produce respectable effectiveness research of our own.
Whichever course material offering – whether commercial or open – provides the greatest benefit at the lowest cost deserves to win, plain and simple. The more efficacy research is published in credible outlets, the easier it is to see who’s winning. That’s why I want to see Pearson put their very best foot forward here. It’s also why OER effectiveness research needs to be putting its very best foot forward. Adopting commercial materials versus adopting OER is a decision with multi-billion dollar implications. It is a decision we absolutely must get right, with significant impacts on state and federal grant and loan programs, student loan debt loads, drop, withdraw, and graduation rates, access to educational opportunity, and a range of other critically important issues – not the least of which is student learning.
Just this morning we received another relevant and interesting bit of data. According to new survey data from Inside Higher Ed and Gallup, it appears that faculty nationwide are recognizing the financial problems posed by commercial materials and recognizing the potential of OER:
Among the 2,175 survey respondents, more than 9 in 10 faculty believe that textbooks and other commercial course materials are too expensive and that faculty should be assigning more OER. Given that faculty can rarely agree on anything, 93% and 92% agreement seem like important and significant results.