Prelude, Percolation, and Preparation

As a general rule, I don’t believe that colleges and universities understand why students are willing to go tens or hundreds of thousands of dollars into debt to attend them. My experience has been that individual faculty are even less likely to understand these reasons than the leaders of their institutions.

Faculty want college to be a journey of self-discovery and self-improvement for students that catalyzes a lifelong love of learning and blossoms into genuine curiosity about the world around us; a time in which students develop critical thinking, civic-mindedness, and other attitudes, values, and skills that will help them develop into truly wonderful human beings.

However, not many people are willing to pay $20k or $50k or $200k for this collection of experiences alone. Most people are willing to invest financially in a college education because a college education pays a clear financial return. We can bemoan this instrumental or even transactional view of education, but bemoaning it doesn’t change it.

An interesting question to ask is “why is a college degree the gateway to greater financial success and stability?” Among the many sophisticated ways this question might be answered, there is one ruthlessly practical way in which the relationship between degrees and earnings is both codified and enforced – degree requirements in job descriptions.

Employers just refuse to hire people without college degrees into many kinds of positions. By requiring a college degree for a person to even be eligible for a position, employers create demand for college degrees. And it is this decision by employers – and this decision alone – that makes college degrees so financially valuable. Would you spend $100k to become eligible for a job if there were less expensive ways to gain that eligibility? Probably not.

Another interesting thought exercise, then, is to consider the question: what would happen to the value of a college degree if employers stopped requiring college degrees? More specifically, how would students’ willingness to pay for college degrees change if employers stopped requiring college degrees? (Here’s a refresher on the relevant economics.) And how might the higher education ecosystem change if students’ willingness to pay shifted downward dramatically? And what if that happened relatively quickly?

I’ve been watching this situation with some interest since I first put these pieces together about a decade ago (I’m sure I wasn’t the first to do so). So I was interested this week to read that Google, Apple, and 13 other companies no longer require employees to have a college degree. Do you think other big employers will follow their lead? Will small employers? How long would it take for a change in job description writing and hiring practices to percolate throughout a majority of employers – to reach a “tipping point,” as it were?

And what can higher education be doing ahead of that event to prepare itself?

RISE and Instructional Design

Matt Crosslin has posted a thoughtful response to our RISE article from last year. An open implementation of RISE was recently published in the Journal of Open Source Software. Since Matt took the time to engage so thoughtfully, I wanted to respond in kind. (Also, it’s a breath of fresh air to write a little about instructional design… it’s good to get back to your roots.)

[T]he bigger concern with the way grades are addressed in the RISE framework is that they are plotting assessment scores instead of individual item scores.

Actually, we’re using neither individual items nor the entire assessment score. We’re using testlets, small bundles of individual outcome-aligned items. On average, we’re looking at a group of four individual items per learning outcome. This gives us better construct validity than a single item, while avoiding the many problems you correctly identified with using an entire assessment.
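
For concreteness, here’s a minimal sketch of testlet scoring in Python; the data, names, and four-item grouping are illustrative stand-ins, not our actual implementation.

```python
# Minimal sketch of testlet scoring (hypothetical data, not production code).
# Each item is tagged with the learning outcome it is aligned to; a testlet
# score is the mean score across the items aligned to one outcome.

from collections import defaultdict

# (outcome_id, item_score) pairs for one student, roughly four items per outcome
item_scores = [
    ("outcome_1", 1.0), ("outcome_1", 0.0), ("outcome_1", 1.0), ("outcome_1", 1.0),
    ("outcome_2", 0.0), ("outcome_2", 1.0), ("outcome_2", 0.0), ("outcome_2", 0.0),
]

def testlet_scores(item_scores):
    """Group outcome-aligned items into testlets and average within each."""
    by_outcome = defaultdict(list)
    for outcome, score in item_scores:
        by_outcome[outcome].append(score)
    return {o: sum(scores) / len(scores) for o, scores in by_outcome.items()}

print(testlet_scores(item_scores))
# {'outcome_1': 0.75, 'outcome_2': 0.25}
```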

The biggest concern I have with the RISE framework really comes here: ‘The framework assumes that both OER content and assessment items have been explicitly aligned with learning outcomes, allowing designers or evaluators to connect OER to the specific assessments whose success they are designed to facilitate’…. To explicitly align assessment with a content is not just a matter of making sure the question tests exactly what is in the content, but to also point to exactly where the aligned content is for each question. Not just the OER itself, but the chapter and page number…. [I]f you could actually compare the grades on individual assessment items with the amount of time spent on the page or area that that specific item came from, you might be on to something.

One of the reasons we published this framework is to show people the power that comes from doing good (and hard) design work. Matt’s absolutely correct that RISE analysis is quite opinionated about the kind of course it can be used with. You really do need to have every individual item outcome-aligned. Your algorithm for building assessments from an item pool needs to be outcome-aware to ensure sufficient coverage. Outcome alignment with content needs to be done at the individual page level. &c. The courses we’re using RISE analysis with meet all these criteria. Hopefully, people will look at RISE and the continuous improvement work it enables and say, “I’m willing to put in the design work if that’s part of what I get in return.”
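
To make “outcome-aware” concrete, here’s a hypothetical sketch of assessment assembly that guarantees coverage by drawing a fixed number of items per outcome from an outcome-tagged pool; all names and parameters here are invented for illustration.

```python
# Hypothetical sketch of outcome-aware assessment assembly: every outcome in
# the blueprint gets a fixed number of items from the outcome-aligned pool.

import random

def build_assessment(item_pool, outcomes, items_per_outcome=4, seed=None):
    """item_pool maps outcome_id -> list of item_ids; returns a balanced item list."""
    rng = random.Random(seed)
    selected = []
    for outcome in outcomes:
        candidates = item_pool[outcome]
        if len(candidates) < items_per_outcome:
            raise ValueError(f"Not enough items aligned to {outcome}")
        selected.extend(rng.sample(candidates, items_per_outcome))
    return selected

pool = {"outcome_1": [f"q{i}" for i in range(10)],
        "outcome_2": [f"q{i}" for i in range(10, 20)]}
print(build_assessment(pool, ["outcome_1", "outcome_2"], seed=1))
```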

If you could group students into the four quadrants on each item, and then compare quadrant results on all items in the same assessment together, you could probably identify the questions that are most likely to have some kind of issue. Then, have the system send out a questionnaire about the test to each student – but have the questionnaire be custom-built depending on which quadrant the student was placed in. In other words, each learner gets questions about the same, say, 5 test questions that were identified as problematic, but the specific question they get about each question will be changed to match which quadrant they were placed in for that [question].

This is a super interesting idea. My thinking to date has revolved around engaging faculty in the continuous improvement process, and I’m hoping to blog about this later this week or early next. But I definitely want to consider the possibilities with this approach.
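
To think it through, here’s a rough sketch of the first half of Matt’s proposal (quadrant placement and flagging of potentially problematic items); the median cutoffs and flagging threshold are invented assumptions, not anything from RISE or from Matt’s post.

```python
# Rough sketch of Matt's idea, with invented thresholds: place each student in
# a use-vs-score quadrant for an item, then flag items where an unusually large
# share of students used the content heavily but still scored poorly.

from statistics import median

def quadrant(use, score, use_cut, score_cut):
    """Return one of the four RISE-style quadrants for a single student/item."""
    return (use >= use_cut, score >= score_cut)  # (True, False) = high use, low score

def flag_items(records, share_cutoff=0.4):
    """records maps item_id -> list of (content_use, item_score) per student."""
    flagged = []
    for item, pairs in records.items():
        use_cut = median(u for u, _ in pairs)
        score_cut = median(s for _, s in pairs)
        high_use_low_score = sum(
            1 for u, s in pairs if quadrant(u, s, use_cut, score_cut) == (True, False)
        )
        if high_use_low_score / len(pairs) >= share_cutoff:
            flagged.append(item)
    return flagged

records = {
    "q1": [(10, 0.2), (12, 0.3), (2, 0.9), (3, 0.8)],  # heavy use, low scores
    "q2": [(10, 0.9), (12, 0.8), (2, 0.2), (3, 0.1)],  # use tracks scores
}
print(flag_items(records))  # ['q1'] under these invented numbers
```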

My idea of a well-designed course involves self-determined learning, learner autonomy, and space for social interaction (for those that choose to do so). I would focus on competencies rather than outcomes, with learners being able to tailor the competencies to their own needs. All of that makes assessment alignment very difficult.

That doesn’t sound that different from my idea of a well-designed course. In the ID world I fear there’s a sense of (false) dichotomy between content and assessments that are well-designed and well-aligned on the one hand and spaces of self-determination, autonomy, and social interaction on the other. True, these have historically been two very different ways of thinking about course design. But why can’t you provide well-designed and well-aligned content and assessment as a foundation that anticipates these other activities? Nothing says the quizzes associated with these core course materials have to account for the majority of students’ grades – other assessments that invite students to exercise more autonomy can be weighted more heavily. I believe there’s more room for bringing together a diversity of instructional design approaches than we’ve sometimes been able to see in the past. I’m hoping to write about this more in the future, too.

Matt’s response makes it clear that I should also do more writing about the kind of instructional design that RISE assumes. (For example, in addition to all the alignment issues discussed above, it only works when all of a course’s content is openly licensed – otherwise, you can’t fix any of the problems you find.) I’ll try to work that into my upcoming post about engaging faculty in continuous improvement. That post is getting longer by the day…

Thoughts on OER and Cost Savings

Yesterday, Phil Hill wrote about OpenStax’s new method for calculating the savings students see when their faculty adopt OER.

Welcome Change: OpenStax using more accurate data on student textbook expenditures

His article highlights these paragraphs from Rice University’s recent press release:

Our community is creating a movement that will make a big impact on college affordability. The success of open textbooks like OpenStax [has] ignited competition in the textbook market, and textbook prices are actually falling for the first time in 50 years.

As a result of the unprecedented downward shift in textbook prices, OpenStax will be decreasing its estimated student savings figure from $98.57 to $79.37 based on federal data. The U.S. Department of Education’s National Center for Education Statistics published a study in May stating the average undergraduate student spent $555.60 on required course materials for the academic year. Dividing that number by seven courses (the undergraduate average, according to enrollment data) comes out to $79.37 in savings for each student using an OpenStax book.

I agree with Phil that it’s great to see OpenStax updating the baseline from which they calculate student savings.
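
For reference, the per-course arithmetic in the press release is easy to reproduce from the NCES figures it cites:

```python
# Reproducing the press release's per-course figure from the NCES numbers it cites.
annual_spend = 555.60   # average undergraduate spend on required course materials
courses_per_year = 7    # average undergraduate course load per the press release

per_course = annual_spend / courses_per_year
print(f"${per_course:.2f}")  # $79.37, OpenStax's new per-student savings estimate
```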

What strikes me as odd about OpenStax’s new way of calculating the savings associated with OER is that it ignores some fairly important and well-understood things about student spending when faculty adopt OER – things OpenStax has made a pillar of their long-term sustainability plan. And if you’re going to update the way you figure savings, why not fix all the issues with your savings estimate at once?

In her 2010 whitepaper for the Student PIRGs titled A Cover to Cover Solution: How Open Textbooks are the Path to Textbook Affordability, Nicole Allen estimated that students whose faculty adopt OER spend, on average, $27.68 (see Table 3, page 12). This figure is explained further in the section titled “Open textbooks could reduce costs by 80% overall.” (Spoiler: some students purchase a printed version of the open textbook, some students spend money printing chapters themselves at Kinkos, etc.)

This research is now eight years old and definitely needs to be refreshed. Student attitudes toward print and reading online have likely changed. The cost of purchasing a printed copy of an open textbook may have changed, though the average price of a printed OpenStax title doesn’t seem to be any lower than the average price of the Flat World Knowledge titles Nicole’s research examined. But regardless of how and how much these indicators have moved, the main point here is this – the amount of savings that come from OER adoption does not equal 100% of what students would have paid for other materials.

While the field more broadly desperately needs an updated version of Nicole’s research, OpenStax does not. They already know how much money students spend when their faculty adopt OpenStax.

OpenStax is quite public about the fact that print textbook sales and the revenue generated through OpenStax Partner vendors play an important role in their sustainability model. (See their FAQ.) They’re using these revenue streams to sustain their organization, so they obviously know exactly how much of this revenue they’re receiving. Some percentage of students who use OpenStax purchase a printed copy from Amazon. Some percentage of students who are assigned an OpenStax book purchase a homework solution from a vendor who is an OpenStax Partner. The percentages of students who are making these purchases must be fairly high, or it wouldn’t be much of a model for sustaining a 50+ person organization.

My question is, as long as they’re updating their savings estimate, why not account for the money students spend when their faculty adopt OpenStax? If the average student using OpenStax is spending $30 on a printed OpenStax book or on a homework system from an OpenStax Partner, why not decrease the savings estimate by $30? Or $25? Or whatever the number is?

Savings from OER should be calculated as “the amount of money students would have spent” minus “the amount of money students did spend.” We know that last number isn’t $0. OpenStax’s sustainability model is predicated on that number not being $0. Yes, in the early days of the field many of us (first person inclusive) estimated cost savings as if students whose faculty adopted OER spent $0. But we know better now. Why won’t we say it out loud?
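
In code, the distinction is trivial to express; the $30 spend below is the hypothetical figure from the previous paragraph, not a measured value.

```python
# Savings should be net of what students actually spend, not assumed to be zero.
# The $30.00 is the hypothetical spend figure from above, not a measured value.

def oer_savings(would_have_spent, did_spend):
    """Savings = what students would have spent minus what they did spend."""
    return would_have_spent - did_spend

baseline = 79.37                     # OpenStax's new per-course estimate
naive = oer_savings(baseline, 0.00)  # the old assumption: students spend $0
net = oer_savings(baseline, 30.00)   # hypothetical spend on print or homework tools
print(naive, net)  # 79.37 49.37
```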

Why does it matter how we calculate savings, you may ask? The way we calculate savings estimates is incredibly important for us as a field. As our estimates of student savings from OER grow larger and larger, they are more likely to catch the eye of policymakers and others who will subject them to rigorous review. If our savings estimates don’t stand up to basic scrutiny, the credibility of the rest of our work will probably also be called into question. We don’t need that headwind.