
MIT OCW Funding Analysis (and Implications)

In an opinion piece for The Tech titled “OpenCourseWare and the Future of Education,” Ryan Normandin lays out MIT OCW’s funding breakdown. It’s the first time I’ve seen the numbers shared publicly. He begins by stating that MIT OCW’s budget is $4.1 million per year (though he notes that OCW cut $500,000 in costs for 2009), and then analyzes revenue by source:

Since its creation, 22 percent of OCW’s expenditures have been covered by the Institute, 72 percent has been paid for through grants from the William and Flora Hewlett Foundation and the Andrew Mellon Foundation, and 6 percent has been covered by donations, revenue, and other sources.

(His article states that these numbers are “since its creation,” but they’re the best breakdown I know of. If you know of a similar breakdown for MIT OCW’s 2009 finances, please drop a link in the comments below.)

If we work these numbers out, each year that’s roughly:

– $2,952,000 (72%) covered by Hewlett and Mellon grants,
– $902,000 (22%) covered by MIT internally, and
– $246,000 (6%) covered by donations, corporate sponsors, Amazon.com affiliate revenue, and all other sources.
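The arithmetic behind these figures is simple; as a sanity check, here is a short Python snippet using the budget and percentages quoted above:

```python
# Sanity check of the OCW budget split described above.
# $4.1M is the annual budget figure from the article; the
# percentages are the "since its creation" breakdown.
budget = 4_100_000

shares = {
    "Hewlett and Mellon grants": 0.72,
    "MIT internal support": 0.22,
    "Donations, revenue, and other sources": 0.06,
}

for source, pct in shares.items():
    print(f"{source}: ${budget * pct:,.0f}")
```

This reproduces the $2,952,000 / $902,000 / $246,000 split above.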

Ryan’s article is an extended argument for why MIT should continue to support OCW after its grant funding runs out in two years. I (and, I expect, most readers of this blog) agree with the importance he places on the project and the public good it has become. More than that, MIT OCW is critically important to the broader field of open education.

Because MIT OCW receives such a large percentage of the OCW world’s traffic and media attention, potential problems for MIT OCW are potential problems for all of us.

I keep asking myself how you sustain a project when three quarters of its funding is pulled out from under it. Two years is not that far away. And it already feels like I’m getting a “Please remember to donate to MIT OCW” email once a month. On 25% of its annual budget, what would MIT OCW do? If MIT OCW were to go into stasis (as USU OCW recently did), how would the world view that?

More importantly, what is Plan B for the broader OER field? Imagine that two years from now MIT OCW announces drastic cutbacks (or temporary suspension) of its program. How do the rest of us argue for open sharing on our campuses then? Perhaps these arguments would revolve around the sharing model, or the way sharing happens – “we’ll do it differently in the following way…” Perhaps they would revolve around business models and using OCW to generate revenue (e.g., by using open courses to market for-credit online courses). How else do we make the argument for open sharing on our campuses in a post-MIT OCW world?

I’ve already heard “just because MIT can do it doesn’t mean we can” about a thousand times from faculty and administrators. What if that becomes “Not even MIT can do it for longer than a few years…”?


The LHC and Education

I’ve always been impressed by the idea of the Large Hadron Collider. It’s an unthinkably expensive, large-scale experimental apparatus designed for the sole purpose of generating and collecting data. Why would countries spend so much money on data? Why would so many people dedicate the better part of their lives to a project like the LHC? Because the so-called “hard” sciences – fields like physics and astronomy – owe their remarkable progress in understanding the structure of matter and the nature of the universe to the fact that they really care about data. They care about data in a way that educators have a difficult time comprehending, let alone emulating.

The data that we, educators, gather and utilize is all but garbage. What passes for data for practicing educators? An aggregate score in a column in a gradebook. A massive, coarse-grained rolling up of dozens or hundreds of items into a single, collapsed, almost meaningless score. “Test 2: 87.” What teacher maintains item-level data for the exams they give? What teacher keeps this data semester to semester, year to year? What teacher ever goes back and reviews this historical data? After a recent tweet on this topic, a number of colleagues accused me of having physics envy. Believe me, you don’t have to wish you were a physicist to be disappointed by the quality of data educators have access to.

I’m beginning to believe that we’ve got it completely backwards. For decades we’ve been trying to use technology to improve the effectiveness of education. How, specifically, have we tried to use technology? At a high level, we’ve tried to use it to deliver content to learners. The goal has been to “find something that works,” and then deliver that something (interactive content, etc.) to learners at high fidelity and low cost. In our attempts to deliver effective content at scale, I believe we have had a nationwide (if not worldwide) encounter with the reusability paradox, which I first wrote about at length in 2001. Briefly stated, the reusability paradox says that, due to context effects, the pedagogical effectiveness of content and its potential for reuse are orthogonal to one another. This finding is too inconvenient to accept, as it would destroy or severely maim the prominent paradigm of educational technology research, and so it has been roundly ignored by the educational research community.

While using technology to deliver content seems to have had no noticeable impact (or even a slightly negative one) on the effectiveness of education, using technology to deliver content has had a huge impact on the accessibility of education. Think of distance learning… Think of opencourseware and open educational resources… Think of the millions of people who now have access that never would have had access otherwise. The impact of using technology to deliver content on increasing access to education is completely unassailable and totally undeniable.

So, if using technology to deliver content is not improving the effectiveness of education, is there another way we might use technology that can? I believe there is. I believe it so strongly that for the first time in several years I am opening a new line of research. I believe (and I fully admit that it is only a belief at this point) that using technology to capture, manage, and visualize educational data in support of teacher decision making has the potential to vastly improve the effectiveness of education. Think of it as “educational data mining” or “educational analytics.” For example, think of all the data, algorithms, and resources that go into selecting ads to show in search engine results and other places around the web, and then think of using all that horsepower to make suggestions to teachers about appropriate opportunities to intervene with students.

The Open High School of Utah is the first context in which I’m studying this use of technology. Because it is an online high school, every interaction students have with content (the order in which they view resources, the time they spend viewing them, the things they skip, etc.) and every interaction they have with assessments (the time they spend answering them, their success in answering them, etc.) can all be captured and leveraged to support teachers. The OHSU teaching model, which we call “strategic tutoring,” involves using these data to prioritize which students need the most help and enabling brief tutoring sessions. A teacher’s typical day involves visiting the dashboard, viewing the first student in a prioritized list of students, seeing what s/he needs help on, and engaging him/her by Skype, phone, IM, or other means, for a very brief, very targeted individual tutoring session. Then the next student, then the next student, etc. Students who are on track or working ahead in the online curriculum don’t have to wait for an interaction with the teacher (they’re succeeding, after all), and those who need help get it – individualized, just in time, and sometimes before they even know they need it. From a caring human being – not a supposedly intelligent tutoring system.
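As a rough illustration of the kind of prioritization a “strategic tutoring” dashboard might perform, here is a minimal Python sketch. The field names, weights, and scoring rule are invented for illustration; they are not OHSU’s actual implementation:

```python
# Hypothetical sketch of dashboard-style student prioritization.
# All fields and weights below are illustrative assumptions, not
# the actual OHSU system.
from dataclasses import dataclass

@dataclass
class StudentActivity:
    name: str
    days_since_last_login: int
    lessons_behind_schedule: int
    recent_quiz_accuracy: float  # 0.0 - 1.0

def need_score(s: StudentActivity) -> float:
    """Higher score = more likely to need a tutoring session."""
    return (s.days_since_last_login * 1.5
            + s.lessons_behind_schedule * 2.0
            + (1.0 - s.recent_quiz_accuracy) * 10.0)

def prioritize(students):
    """Sort students so the teacher sees the neediest first."""
    return sorted(students, key=need_score, reverse=True)

students = [
    StudentActivity("on-track", 0, 0, 0.95),
    StudentActivity("struggling", 5, 3, 0.40),
]
print([s.name for s in prioritize(students)])  # neediest first
```

The point is not the particular weights, but that simple interaction data, once captured, can be turned into an ordered to-do list for a human tutor.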

Now, if OHSU weren’t delivering content online we couldn’t capture all this data. So in one sense, delivering content online is key – if only to get the types of data we need to support teachers supporting students. But currently, we’re stopping short, mistaking the means for the end.

Another realization that comes part way down this path is that our instructional design programs may teach people how to design instruction that is motivating and engaging, but we don’t even begin to teach people how to design materials and systems that capture the right kinds of data. We don’t even discuss what the “right” kinds of data might be.

Coming back to the LHC, I think meaningful progress in education will depend on educators becoming infected with the kind of passion for data the LHC embodies. Not rolled-up percentile scores and coarse-grained data that obscure all the meaningful details we might care about. We need access to real-time data on every individual student every day of the year, we need tools and techniques for supporting teachers in interpreting those data, we need new teaching models that leverage the existence of these data and tools, etc. This is what I think technology-enhanced education is supposed to be.

The investment it would take to deploy such an infrastructure would rival the cost of the LHC, but would be almost impossible to make – because educators either don’t care about data or have a vision of data that is limited by their own experience recording things in a gradebook or spreadsheet. Using technology in creative ways could provide us with so much more data it would boggle the imagination… It could transform the teacher’s work from one based on hunches and intuitions to one actually based on data. And lo and behold, we might actually move the needle a bit when we combine the best of hardcore empiricism with the best of caring, nurturing people.

We’ll certainly never meet Bloom’s 2 sigma challenge if we think the proper role of technology in education is simply delivering content (whether interactive, intelligent, or otherwise). However, if we get serious about capturing and using data to support teacher decision-making and improve student learning, we may have something.


July BYU IS OCW Update

Two exciting bits of news from the ongoing BYU Independent Study OCW trial. There’ll be loads more data / graphs / etc. in our presentation at Open Ed 2009 next week.

First, things seem to be remarkably stable on the “conversion to paying customers” side of the study. Out of 9179 visitors to the OCW site, 270 have become paying customers of BYU IS (that’s 2.94%). This number is sticking right around 3%.
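For the record, the conversion rate works out like this (a trivial check in Python):

```python
# BYU IS OCW trial: OCW visitors who became paying customers.
visitors = 9179
paying = 270
print(f"{paying / visitors:.2%}")  # 2.94%
```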

Second, the final cost data for converting BYU IS courses to OCW have come in. As you may recall, there are three high school courses and three university courses in the trial. Our strategy was to create automated transforms to do most of the work of reformatting courses for publication in BYU IS OCW, and to do as little “by-hand” cleanup as possible. (BYU IS owns the IP in its online courses, so there is no IP scrubbing to do.) Consequently, the first high school course and the first university course have rather high conversion costs, because we billed the creation of the transformation scripts to the first course in each area. Beyond those first courses, the conversion costs are remarkably low:

High School Courses
GOVT 45: $5,204
EARTH 41: $1,204
GEOG 41: $1,142

University Courses
TMA 150: $3,485
BUS M 418: $320
SFL 110: $248

As a comparison point, MIT OCW estimates a cost of approximately $15,000 per course to publish syllabi and other materials used in teaching on-campus MIT classes. In contrast, the BYU IS OCW courses are complete courses designed from the beginning for online learning. With the transforms written and used a few times, we now know what it would cost if the decision were made to release more BYU IS courses as OCW in the future: about $1150 per high school course and about $250 per university course.
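Treating the first course in each category as carrying the one-time script-development cost, the marginal per-course estimates can be checked with a quick Python calculation:

```python
# Per-course conversion costs from the trial. The first course in
# each category absorbed the one-time cost of writing the
# transformation scripts, so the marginal cost of future courses
# is better estimated from the subsequent courses alone.
hs_costs = {"GOVT 45": 5204, "EARTH 41": 1204, "GEOG 41": 1142}
univ_costs = {"TMA 150": 3485, "BUS M 418": 320, "SFL 110": 248}

def marginal_estimate(costs):
    subsequent = list(costs.values())[1:]  # drop the first course
    return sum(subsequent) / len(subsequent)

print(marginal_estimate(hs_costs))    # 1173.0
print(marginal_estimate(univ_costs))  # 284.0
```

These averages land in the neighborhood of the ~$1,150 and ~$250 per-course figures cited above.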