There’s been a lot of discussion about open textbooks, efficacy research, and student cost savings in the wake of this year’s #OpenEd15. The general theme of the conversation has been a concern that a focus on open textbooks confuses the means of open education with the end of open education. I’m compiling a Storify of examples of this really engaging writing – you should definitely take the time to read through it. I’m not responding directly to many of the points made in those posts here, but will in later follow-up posts.
The overall criticism about ends / means confusion may or may not be true – it depends entirely on what you think the end or goal of open education should be. This is a conversation we almost never have in the field of open education. What is our long-term goal? What are we actually trying to accomplish? What kind of change are we trying to create in the world? The recently published OER strategy document, as informative as it is, reads more like a list of issues and opportunities than what Michael Feldstein describes as “rungs on a ladder of ambition.” Answering these questions leads to additional, more proximate concerns, like what specific steps do we need to take to get from here to there? In his #OpenEd15 keynote, Michael pushed our thinking with some additional questions, like “Who are we willing to let win?”
As I have reflected on the post-conference conversation, and these larger questions about goals and purpose, I’ve decided to share some of my current best answers to these questions. (Disclaimer: my answers are guaranteed to evolve over time.) Your answers will almost certainly be different from mine – and that’s a good thing. I’m not sharing my answers as a way of claiming that they reflect the One True Answer. I’m sharing them in the hope that they will prompt you to think more deeply about your own answers. I find that nothing helps me clarify my thinking quite like reading others’ thinking that I disagree with. As we all take the opportunity to ask and answer these important questions for ourselves, and to do that thinking publicly, out loud, who knows what might happen?
When someone cites the College Board number, they often (but not always) do so in the process of trying to lead their listener to the conclusion that textbooks are too expensive. Not just really expensive. Too expensive. In the textbook context, too expensive means “so expensive as to be harmful to students.” The College Board number typically surfaces in an argument that runs along the lines of – textbooks are too expensive, thus harming students, and for the sake of students we should do something about the cost of textbooks.
When someone cites the student survey number, they often (but not always) do it in the process of reacting to the College Board number, as if to say “See? Textbooks aren’t nearly as expensive as some would lead you to believe. The situation isn’t that bad.” And, by implication, students are doing ok.
My question is this: if the issue we want to discuss is the impact of textbook costs on students, why don’t we just go straight to the data that deal directly with the impact of textbook costs on students? When we dip our toe in the $1200/$600 debate we’re likely to raise questions among listeners that will only distract them from the issue we’re actually trying to discuss.
Rather than using cost data as a proxy for impact on students, let’s talk about what the data say the actual impact of textbook costs is on students.
One of the best sources of data available on this subject are the Florida Virtual Campus surveys. The most recent, including over 18,000 students, asks students directly about the impact of textbook costs on their academic career:
What impact does the cost of textbooks have on students? Textbook costs cause students to occasionally or frequently take fewer courses (35% of students), to drop or withdraw from courses (24%), and to earn either poor or failing grades (26%). Regardless of whether you have historically preferred the College Board number or the student survey number, a third fact that is beyond dispute is that surveys of students indicate that the cost of textbooks negatively impacts their learning (grades) and negatively impacts their time to graduation (drops, withdraws, and credits).
And yes, we need to do something about it.
Thankfully, faculty are already well aware of the problem. According to a recent Inside Higher Ed / Gallup poll, more than 9 in 10 faculty agree that textbooks and other commercial course materials are too expensive:
According to the poll, faculty also overwhelmingly agree that OER are a viable solution to the problem of textbook costs: more than 9 in 10 faculty believe that they should be assigning more OER. Now we just need to help and support them as they make that change.
(Another very real impact of textbook costs on students is their contribution to student loan debt. That’s an important conversation, but one that I’ll save for later.)
For almost three years Lumen Learning has been helping faculty, departments, and entire degree programs adopt OER in place of expensive commercial textbooks. In addition to saving students enormous amounts of money, we’ve helped improve the effectiveness of the courses we’ve supported, as we’re demonstrating in publications in peer-reviewed journals co-authored with both faculty from our partner schools and other researchers. We’re making great friendships along the way. It’s been absolutely amazing.
Last year we received one of seven grants from a Bill and Melinda Gates Foundation competition to create next generation personalized courseware. We’ve spent the last year working with something like 80 faculty from a dozen colleges across the country co-designing and co-creating three new sets of “courseware” – cohesive, coherent collections of tools and OER (including some great new simulations, whose creation was led by Clark Aldrich, and newly CC licensed video from the BBC) that can completely replace traditional textbooks and other commercial digital products.
As part of this work we’ve been pushing very hard on what “personalized” means, and working with faculty and students to find the most humane, ethical, productive, and effective way to implement “personalization.” A typical high-level approach to personalization might include:
building up an internal model of what a student knows and can do,
algorithmically interrogating that model, and
providing the learner with a unique set of learning experiences based on the system’s analysis of the student model.
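The three steps above can be sketched as a tiny, deliberately simplified loop. Everything here – the class names, the mastery estimate, the selection rule – is an illustrative assumption, not how any particular product actually works; it only shows the shape of the “system decides” paradigm being described.

```python
# Hypothetical sketch of the three-step "system decides" personalization loop.
# All names and numbers are illustrative assumptions, not a real implementation.

class StudentModel:
    """Step 1: an internal model of what a student knows and can do."""
    def __init__(self):
        self.mastery = {}  # skill -> estimated mastery in [0, 1]

    def update(self, skill, correct):
        # Crude running estimate: nudge mastery toward 1 on a correct
        # answer and toward 0 on an incorrect one.
        prior = self.mastery.get(skill, 0.5)
        self.mastery[skill] = prior + 0.2 * ((1.0 if correct else 0.0) - prior)

def next_activity(model, activities):
    """Steps 2-3: algorithmically interrogate the model and hand the learner
    the next experience -- the student's whole role is the "Next" button.
    Here, the rule is simply: target the skill with the lowest mastery."""
    return min(activities, key=lambda a: model.mastery.get(a["skill"], 0.5))

model = StudentModel()
model.update("fractions", correct=False)  # mastery drops to 0.4
model.update("decimals", correct=True)    # mastery rises to 0.6
activities = [{"name": "Fraction practice", "skill": "fractions"},
              {"name": "Decimal practice", "skill": "decimals"}]
print(next_activity(model, activities)["name"])  # → Fraction practice
```

Note that the learner never appears in this loop except as a source of right/wrong signals – which is exactly the problem discussed next.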
Our thinking about personalization started here. But as we spoke to faculty and students, and pondered what we heard from them and what we have read in the literature, we began to see several problems with this approach. One in particular stood out:
There is no active role for the learner in this “personalized” experience. These systems reduce all the richness and complexity of deciding what a learner should be doing to – sometimes literally – a “Next” button. As these systems painstakingly work to learn how each student learns, the individual students lose out on the opportunity to learn this for themselves. Continued use of a system like this seems likely to create dependency in learners, as they stop stretching their metacognitive muscles and defer all decisions about what, when, and how long to study to The Machine. This might be good for creating vendor lock-in, but is probably terrible for facilitating lifelong learning. We felt like there had to be a better way. For the last year we’ve been working closely with faculty and students to develop an approach that – if you’ll pardon the play on words – puts the person back in personalization. Or, more correctly, the people.
It’s About People
Our approach still involves building up a model of what the student knows, but rather than presenting that model to a system to make decisions on the learner’s behalf, we present a view of the model directly to students and ask them to reflect on where they are and make decisions for themselves using that information. As part of our assessment strategy, which includes a good mix of human-graded and machine-graded assessments, students are asked to rate their level of confidence in each of their answers on machine-graded formative and summative assessments.
This confidence information is aggregated and provided to the learner as an explicit, externalized view of their own model of their learning. The system’s model is updated with a combination of confidence level, right / wrong, and time-to-answer information. Allowing students to compare the system model of where they are to their own internal model of where they are creates a powerful opportunity for reflection and introspection.
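As a rough illustration of that aggregation, the sketch below combines correctness, self-reported confidence, and time-to-answer into a per-topic view a student could inspect. The field names, the 1–5 confidence scale, and the overconfidence flag are all hypothetical assumptions for the example, not the production model.

```python
# Hypothetical sketch of aggregating machine-graded responses into an
# externalized view of the student model. Field names and thresholds are
# illustrative assumptions only.

from collections import defaultdict

def summarize(responses):
    """responses: list of dicts with keys
       topic, correct (bool), confidence (self-rating, 1-5), seconds (float).
    Returns a per-topic summary so a student can compare how well they
    actually did with how sure they felt."""
    by_topic = defaultdict(list)
    for r in responses:
        by_topic[r["topic"]].append(r)
    view = {}
    for topic, rs in by_topic.items():
        accuracy = sum(r["correct"] for r in rs) / len(rs)
        avg_conf = sum(r["confidence"] for r in rs) / len(rs)
        avg_time = sum(r["seconds"] for r in rs) / len(rs)
        view[topic] = {
            "accuracy": round(accuracy, 2),
            "avg_confidence": round(avg_conf, 2),  # 1 = guessing, 5 = certain
            "avg_seconds": round(avg_time, 1),
            # Flag a mismatch worth reflecting on: high confidence, low accuracy.
            "check_in": avg_conf >= 4 and accuracy < 0.6,
        }
    return view

demo = [
    {"topic": "supply & demand", "correct": False, "confidence": 5, "seconds": 12},
    {"topic": "supply & demand", "correct": True,  "confidence": 4, "seconds": 20},
    {"topic": "elasticity",      "correct": True,  "confidence": 2, "seconds": 45},
]
print(summarize(demo))
```

The point of surfacing a view like this to the learner, rather than feeding it to a recommender, is precisely the comparison in the last field: where confidence and performance disagree, there is something to reflect on.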
We believe very strongly in this “machine provides recommendations, people make decisions” paradigm. Chances are you do, too. Have you ever used the “I’m Feeling Lucky” button on the Google homepage?
If you haven’t, here’s how it works. You type in your search query, push the I’m Feeling Lucky button, and – instead of showing you any search results – Google sends you directly to the page it thinks best fulfills your search. Super efficient, right? It cuts down on all the extra time of digging through search results, it compensates for your lack of digital literacy and skill at web searching, etc. I mean, this is Google’s search algorithm we’re talking about, created by an army of PhDs. Of course you’ll trust it to know what you’re looking for better than you trust yourself to find it.
Except you don’t. Very few people do – fewer than 1% of Google searches use the button. And that’s terrific. We want people developing the range of digital literacies needed to search the web critically and intelligently. We suspect – and will be validating this soon – that the decisions learners make early on based on their inspection of these model data will be “suboptimal.” However, with the right support and coaching they will get better and better at monitoring and directing their own learning, until the person to whom it matters most can effectively personalize things for themselves.
Speaking of support and coaching, we also provide a view of the student model to faculty and provide them with custom tools (and even a range of editable message templates written from varying personalities) for reaching out to students in order to engage them in good old-fashioned conversations about why they’re struggling with the course. We’ve taken this approach specifically because we believe that the future of education should have much more instructor-student interaction than the typical education experience today does, not far less. Students and faculty should be engaged in more relationships of care, encouragement, and inspiration in the future, and not relegated to taking direction from a passionless algorithm.
This week marks a significant milestone for Lumen Learning, as the first groups of students began using the pilot versions of this courseware on Monday. Thousands more will use it for fall semester as classes start around the country. This term we’ll learn more about what’s working and not working by talking to students, talking to faculty, and digging into the data. We’ll have an even more humane, ethical, productive, and effective version of the courseware when we come out of the pilot in Spring term. And an even better version for next Fall. (We’re really big on continuous improvement.)
This stuff is so fun. There’s nothing quite like working with and learning from dozens of smart people with a wide variety of on the ground, in the trenches experience on the teaching and learning side, and being able to bring the results of educational research and the capabilities of technology into that partnership. You never end up making exactly what you planned, but you always end up making something better.