As I travel the country (and the world) telling people about open educational resources, open textbooks, etc., I frequently receive questions about the quality of openly licensed instructional materials. I’ve answered this question enough that I thought it might be time to actually write something on the topic.
A Tiny Thought Experiment
Imagine you had a favorite textbook (hey – it’s a thought experiment). Now imagine receiving a letter informing you that the author has passed away and left you all the copyrights to the book. You immediately walk across the room and pull your copy off the shelf and open to the copyright page. You carefully cross out the words “All Rights Reserved” and replace them with the words “Some Rights Reserved – this book is licensed CC BY.” Have you changed the quality of the book in any way? No. Simply changing the text on the copyright page does not change the rest of the book in any way.
Consequently, we learn that quality is not necessarily a function of copyright status. We are forced to admit that it is possible for openly licensed materials to be “high quality.” We are also forced to admit that taking poor quality instructional materials and putting an open license on them does not improve their quality, either.
No Monopoly on Quality
Because quality is not necessarily a function of copyright status, neither traditionally copyrighted educational materials nor openly licensed educational materials can exclusively claim to be “high quality.” There are terrific commercial textbooks and there are terrific OER. There are also terrible commercial textbooks and terrible OER. Local experts must vet the quality of whatever resources they choose to adopt, and cannot abdicate this responsibility to publishing houses or anyone else.
Accuracy and OER
Some people are unable to believe that any process other than traditional peer review, licensing, and publication can result in content that is highly accurate. If you were to create a kind of content wild west, where anyone could publish anything and anyone could edit anything published by anyone else, this would obviously result in horrifyingly inaccurate content when compared to content produced via the traditional process.
Except that it doesn’t.
In 2005 Nature conducted an experiment in which they directly compared the accuracy of Wikipedia articles with the accuracy of traditionally reviewed, licensed, and published articles in Encyclopaedia Britannica.
They explain,
We chose fifty entries from the websites of Wikipedia and Encyclopaedia Britannica on subjects that represented a broad range of scientific disciplines. Only entries that were approximately the same length in both encyclopaedias were selected. In a small number of cases some material, such as reference lists, was removed to bring the length of the entries closer together.
Each pair of entries was sent to a relevant expert for peer review. The reviewers, who were not told which article came from which encyclopaedia, were asked to look for three types of inaccuracy: factual errors, critical omissions and misleading statements. 42 useable reviews were returned. The reviews were then examined by Nature’s news team and the total number of errors estimated for each article.
In doing so, we sometimes disregarded items that our reviewers had identified as errors or critical omissions. In particular, as we were interested in testing the entries from the point of view of ‘typical encyclopaedia users’, we felt that experts in the field might sometimes cite omissions as critical when in fact they probably weren’t – at least for a general understanding of the topic. Likewise, the ‘errors’ identified sometimes strayed into merely being badly phrased – so we ignored these unless they significantly hindered understanding.
The results?
Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively.
With 42 usable reviews returned to Nature, the average article in both encyclopaedias contained 4 / 42 ≈ 0.1 serious errors, while Wikipedia averaged 162 / 42 ≈ 3.9 smaller errors per article and Britannica averaged 123 / 42 ≈ 2.9.
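As a quick check on the arithmetic, here is a short Python sketch. The assumption that errors are spread evenly across the 42 reviewed pairs is mine, following the averaging used above:

```python
# Per-article error rates from the Nature (2005) Wikipedia vs.
# Britannica comparison, averaged over the 42 usable reviews.
reviews = 42
serious_each = 4    # serious errors found in each encyclopaedia
wiki_minor = 162    # smaller errors found in Wikipedia entries
brit_minor = 123    # smaller errors found in Britannica entries

serious_rate = serious_each / reviews
wiki_rate = wiki_minor / reviews
brit_rate = brit_minor / reviews

print(f"Serious errors per article (each encyclopaedia): {serious_rate:.2f}")
print(f"Smaller errors per article, Wikipedia:  {wiki_rate:.1f}")
print(f"Smaller errors per article, Britannica: {brit_rate:.1f}")
```

The gap of roughly one smaller error per article is real but modest, which is the point: the "wild west" process landed in the same ballpark as traditional review.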
In other words, alternative authoring and review processes used to create openly licensed resources like Wikipedia can result in content that is just as accurate as the traditional peer review, publication, and licensing processes used to create works like Encyclopedia Britannica.
Distracting People from the Issue at the Core of Quality
Beyond issues of accuracy, when publishers, their press releases, and the media who reprint them say “quality” with regard to textbooks and OER, they actually mean “presentation and graphic design” – is the layout beautiful, are the images high resolution, are the headings used and formatted consistently, is the book printed in full color?
But this is not what we should mean when we talk about quality. There can be one and only one measure of the quality of educational resources, no matter how they are licensed:
- How much do students learn when using the materials?
There are two ways of thinking about this definition of quality.
- One is to realize that no matter how beautiful and internally consistent their presentation may be, educational materials are low quality if students who are assigned to use them learn little or nothing.
- The other way to think about it is this: no matter how ugly or inconsistent they appear to be, educational materials are high quality if students who are assigned to use them learn what the instructor intended them to learn.
Really. For educational materials, the degree to which they support learning is the only meaning of quality we should care about.
Publishers put forth the beauty = quality argument because they have the capacity to invest incredible amounts of money in graphic design and artwork that visually differentiate their textbooks from OER. But when learning outcomes are the measure we care about, we see over and over again that many OER are equal in quality to commercial textbooks. (That is, over and over again we see OER resulting in at least the same amount of learning as commercial textbooks.)
We should never give in to the temptation to focus on vanity metrics like number of pages or full-color photos simply because they’re easy to measure. We have to maintain a relentless focus on the one metric that matters – learning.
Great post David. If we assume that the quality gap is low and narrowing, it begs the question of what the “next” barrier will be. I really think that the lack of willingness to adopt free or low-cost open books stems from the same reason that so few coaches “go for it” on fourth down. If you succeed, it is no big deal, and if you fail you end up a headline the next day. Let’s assume for the moment that all books (open and closed) are mediocre. Is it safer for a community college faculty member to adopt a mediocre book for their class that has free e-copies and costs $10.00 for a paper copy, versus a mediocre book that costs $80.00 for an e-copy and $140.00 for a paper copy from a big-name publisher? If the book is mediocre and is an open book, the person who chose it did a bad job and chose the mediocre book because they were “trying to save money”. If, on the other hand, the book is mediocre and has an obscene price, “at least we got the best book money could buy” (i.e. the most expensive).
It seems to me that it is sadly not about quality. What we need (and you are working your tail off to make this happen) is to have some well-publicized successes with OER materials that cite savings in the hundreds of thousands of dollars that make it into the CHE over and over. We need to find a few targeted wins and promote the heck out of them.
Another great post David. I totally agree that learning needs to be the number one objective – front and center. However, as a person who does some graphic design work, I also recognize that those visual elements can have a strong impact on comprehension and engagement. I think the quality that you are getting from multiple open educational resource editors could also be enhanced by opening up the visual design process of OER materials to collaborative design students. BYU has great illustration and graphic design programs….Maybe a call across campus can help to raise the curb appeal of some of these OER materials.
My story is mostly anecdotal, but I am always amazed at how few of my students LOOK at the graphics. When they are struggling with a concept, I will ask them, “well didn’t this table help?” or “let’s walk through the content visually (with the table)” – they reply, “we didn’t look at that table.” I have had them state to me that they never look at the images, or only look very cursorily, the direct quote was “if it is important they will put it in the text.” – they also tend to not look at all of the specially structured content (items in their own boxes, or example boxes, or further reading).
I am wondering if we as instructors overestimate the impact of visuals in textbooks – or just overestimate how much of the content we are providing for our students is actually ingested. Sigh….
I’m curious about data on the performance of learning resources. I’ve been examining xAPI as a means of gathering data on resource use, but there would also need to be correlation with outcomes. I think there’s a lot of resistance in the OER movement to this kind of tracking. As someone closely involved with faculty adoption of technology and resources, I know I’d have a better chance of ‘selling’ OER if I could demonstrate performance of the resources faculty currently use.
Yesterday an instructor was in my office upset students aren’t reading all the content in her course. I recommended she conduct the assessment before presenting the content to see what the students already know, or can figure out on their own, and then evaluate which resources might actually be needed. So much of faculty effort is spent curating unnecessary resources.
I believe we need to take a close look at what resources actually do contribute to learning outcomes. Just because a resource is adopted and is part of a learning experience, doesn’t mean that resource is contributing to the learning.
You might have dismissed the graphic design measure of quality too quickly, and generalised learning as a measure too broadly.
Intuitively we’d know that graphic design affects reading comprehension, usability, even reusability (selection and reuse as an action for learning), and motivation. I started searching for credible evidence to link here, but the design of my phone, the layout of the search results, and the usability of this text input all demotivated me.
I’m not trying to support the arguments publishers and their advocates make – at least not the ones we’re generalising about. But I think we should collect up the examples of OER that have been developed in open and collaborative ways, and have gone on to achieve significant design quality as well.
The Wikimedia Foundation’s work on mobile themes, easier editing interfaces, and its partnership with PediaPress; boundless.com; and your own efforts to take a wiki text through a graphic design process for print. Lulu has a number of examples like this.
Regards,
Leigh
So the initial premise of the thought experiment seems fallacious. It assumes that slapping a license on an existing work proves the license is not the determining factor in quality. But it doesn’t look at the more common situation: a work that doesn’t exist yet and is produced in the context of a larger business process, which may be commercially oriented or may be an individual wanting to create free shared content. In the former case, because there is an accumulation of capital based on previous activity, the author can afford to invest time in the production of the materials on the assumption that they can recoup this investment down the road. The same is not true of the free author. In their case, they either need to arrange for the work to be subsidized ahead of time by some entity (their institution) or do it off the side of their desk.

This is not to say that the license *itself* dictates quality, or that something produced in a commercial model is by definition of better quality, but the ability to guarantee resources *ahead of time* is built into one model and not the other. Some would argue that the “open” model is better because it ships whatever exists first and then improves it through iteration. That may well be true. But this is the reason why, on a regular basis, the first releases of commercial products outstrip their open alternatives: they are built in a model in which future effort is already accounted for in past costing. For this to be a more sensible discussion, it needs to look at “quality over time versus cost” and not simply “quality.”
Quality is not just tied to pretty pictures, flashy colors, and matching fonts, but most importantly to content. One of the reasons my program is shifting to OER (a collaborative effort of others and myself) is out-of-date content. Lab books are being reissued each year under a new version number, but the content is not being updated, and, as a result, they are using obsolete parts and practices.
In addition, the pedagogy is substantially different and not aligned with what we are trying to accomplish. The old standard is to follow procedures to the letter and answer multiple choice questions. Lather. Rinse. Repeat. Now, the labs build on each other and require students to think critically about what they are doing and to articulate their knowledge. They are asked to build things that are not just busy work, but which also have value and use.
I don’t have any flashy images, and no colors at this point, but all my fonts (mostly) match, and most importantly the students are engaged and interested in what new adventure the next lab will bring.