Thoughts on Conducting Research in MOOCs

One of the philosophical underpinnings of MOOCs as practiced by Siemens, Downes, et al. has been the rejection of the idea of pre-defined learning outcomes. For example, the LAK12 syllabus reads in part:

“You are NOT expected to read and watch everything. Even we, the facilitators, cannot do that. Instead, what you should do is PICK AND CHOOSE content that looks interesting to you and is appropriate for you. If it looks too complicated, don’t read it. If it looks boring, move on to the next item.” The learning outcomes will, consequently, “be different for each person.”

This makes MOOCs almost completely immune to rigorous investigation with regard to how they function as a means of facilitating learning. There can be no uniform pre-test. There can be no uniform post-test. MOOCs make a loud point about the fact that they don’t teach anything in particular. No one is supposed to learn anything in particular. Consequently, there are no broad outcomes to measure. Ergo, it is difficult to say anything about MOOCs from the perspective of whether or not they succeed in facilitating learning, at least under the traditional group “learning gains” paradigm of educational research.

If we can’t inquire broadly about the educational effectiveness of MOOCs, perhaps we can at least inquire broadly about the attitudinal impacts of MOOCs on participants. In a traditional context, learning analytics would correlate various behaviors, degrees of behavior, and patterns of behavior with pre-defined, uniform learning outcomes. It seems that an interesting parallel approach to learning analytics in the MOOC context would be to correlate a variety of behaviors with the degree of satisfaction experienced by MOOC participants.

In other words, instead of using grades as the dependent variable in MOOC research, we might for example imagine using responses to a satisfaction survey. Rather than asking, “did engaging in this highly designed set of activities help a person learn what we were hoping they would learn?” we might instead ask, “did engaging in a unique set of activities help this person reach the specific outcome(s) they were hoping to achieve when they enrolled in the MOOC?”
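The analysis this implies is straightforward to sketch. Below is a minimal, purely illustrative example in Python: it correlates a few hypothetical behavior counts (blog posts, comments) with self-reported satisfaction ratings, using satisfaction as the dependent variable in place of grades. The record fields and the data are invented for illustration; real MOOC logs and survey instruments would look different.

```python
# Sketch: satisfaction ratings as the dependent variable instead of grades.
# All field names and data are hypothetical, for illustration only.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example records, one per MOOC participant:
# behavior counts from activity logs, plus a 1-5 satisfaction rating.
participants = [
    {"posts": 12, "comments": 30, "satisfaction": 5},
    {"posts": 2,  "comments": 5,  "satisfaction": 2},
    {"posts": 8,  "comments": 14, "satisfaction": 4},
    {"posts": 0,  "comments": 1,  "satisfaction": 1},
    {"posts": 5,  "comments": 9,  "satisfaction": 3},
]

satisfaction = [p["satisfaction"] for p in participants]
for behavior in ("posts", "comments"):
    r = pearson([p[behavior] for p in participants], satisfaction)
    print(f"{behavior}: r = {r:.2f}")
```

In a real study one would of course need far more participants, validated survey items, and appropriate significance testing, but the basic shape of the inquiry is just this: which patterns of behavior co-vary with participants reporting that they got what they came for?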

What might we learn from this kind of research? Are MOOCs giving people the knowledge, experience, or other outcomes they’re hoping for? Are there certain patterns of behavior among MOOC participants that correlate more highly with satisfying experiences than other patterns do? Etc.

BYU PhD student Michael Atkisson and I are going to work on this for his dissertation. Has someone else already started down this road? Am I missing something? Are there some satisfaction data somewhere we should be starting from?

15 thoughts on “Thoughts on Conducting Research in MOOCs”

  1. Here is my response to David Wiley’s post on conducting research into MOOCs.
    Yes, David, we (and I) have done some research into MOOCs in two past studies, so please see the papers under publication for details.
    You will find my research posts here, here and here.
    There is a research group within the Change11 MOOC where “we” have discussed all the options you mentioned in your post.  I reckon it would be worthwhile to explore the learners’ experience.  However, participants’ satisfaction is a rather subjective measure and would not necessarily be a valid and reliable way to measure the “learning outcomes” of the course, since George and Stephen have stated clearly what is to be achieved in a MOOC.  Such a measure of satisfaction also tends to relate strongly to people’s attitudes towards certain ways of learning (their learning habits) or their preferred learning styles (again a controversial topic, which Roy, Jenny and I tried to dig into in CCK08).  Although I think a pattern emerged from that research, it could be difficult to generalize about how people learn (most effectively, or purposely).

    The emotional aspects and critical thinking (reasoning) of participants would also significantly affect how they value the course, based on their experience.  This is especially pronounced when people new to the course have difficulty making sense of the learning, with a sense of isolation, due to the abundance of information at the beginning of the course, or when they don’t feel their voices are being heard, and so withdraw from making connections or from posting blogs or forum comments.  These people would naturally become lurkers and remain lurkers throughout the course, or drop out, if they didn’t find enough interest in the course.  This seems to relate to the participants’ needs and expectations, motivation and autonomy.

    My past experience with research was: you could get very positive responses from a small sample of participants (the active ones, who would be likely to take part in your research).  However, those who were lurkers might not be too interested in responding.  Those who did respond provided us with a range of “perceptions” and “experiences”, from very positive to not so positive (though the latter were always few).  We still need to conduct research to understand all these learning experiences better.

    There are many others who have conducted research into MOOCs, including George, Stephen, Roy, Jenny, Frances, Rita, Helene, Wendy, and Antonio.

    I have a few questions though:

    1. Don’t we all seem to be conducting research in an “islands of research” mode?  On one hand, we support and encourage open learning and open research, but on the other, we all seem afraid of sharing our research for fear that others would get ahead of us and publish first in academia.  That seems at odds with the golden paradigm of open research.  But is that the reality?

    2. What could be done to make research on MOOCs more collaborative, or cooperative?  Is networked MOOC research feasible?  What are the pros and cons of conducting research in an open, transparent manner?

    3. Finally, I understand that PhD candidates have to conduct research more independently, as they have to publish their papers to earn their qualification.  Would that limit the possibility of doing research cooperatively with other researchers, especially in an institutional environment?

    4. Are open researchers (similar to open scholars) the way to go in future research?

    More sharing in forthcoming posts.

  2. In the smaller SMOOC I’m working with, whether the participants were able to achieve their own goals will indeed be central to any research, although these goals might be adapted as the course goes on.

    So yes, I accept as valid the idea that the question should be “did engaging in a unique set of activities help this person reach the specific outcome(s) they were hoping to achieve”, but we might need to use not only those outcomes they stated at the beginning, but also the outcomes they create as they go.

  3. I strongly believe that we need more expansive research designs than in traditional (experimental) research on learning. So I agree with you, David, but I also believe that satisfaction surveys are quite limited in how clearly they can reveal the underlying patterns. We can use so-called Happy Sheets after a workshop or an open course and we will get positive feedback. But what does this say about the “real” learning experiences? There is a long debate in this regard on the limitations of Kirkpatrick’s evaluation model. So we should avoid falling into old traps again, especially when learning is as exciting as it is in a MOOC.

    I like the idea of open research, and it might be a good idea for upcoming MOOCs to include a research area (posted on the central website) with questions and discussions. Some studies might then be conducted “on the fly”, such as surveys, while others might be prepared as future studies.

  4. I didn’t look at MOOCs in my dissertation, but I did look at relationships (satisfaction, perceived learning, learning outcomes as assessed by instructors) in “Communities of Inquiry” (Garrison, Anderson, Archer, et al.) … a different animal, but having some of the same research considerations. As you note, assessment of learning outcomes without defined learning objectives is problematic, which makes research on learning effectiveness without an examination of learning outcomes problematic. However, just because there is no set list of objectives (as defined by the instructor / course designer), doesn’t mean the MOOC participants (or the people facilitating the sessions) don’t have objectives in mind. As I’ve joked with Dave, the title of a course hints at the objectives. For example, in an Introduction to Open Education course, it seems a fair assumption that “participants will be able to compare and contrast approaches to open education”. Given the “take what you want / need” approach of a MOOC, it may be an interesting research approach to round up a small-ish (i.e. manageable) group of MOOC participants and map out their personal objectives and follow their outcomes based on those objectives. Questions that are interesting (to me) include: What is the nature of participant objectives? To what extent are those objectives met? 

    In addition, as others have noted in the comments, satisfaction is a squishy measure … one that most studies suggest has little (or no) correlation with objective measures of learning (as I saw in my dissertation) … and the same can be said of learner perceptions of learning. Yet satisfaction and perceived learning are interesting to consider with regard to PERSISTENCE in a MOOC. Regardless of the presence or quality of the learning objectives, if a person quits (or never starts) the course, it would be hard to argue that it was an effective educational experience for that learner (although Dave would likely disagree with me). My very informal MOOC observations (and personal participation in a few MOOCs) suggest a high level of initial interest, but low levels of “persistence” in the majority of participants. Therefore, I think it would be interesting to consider the relationships between satisfaction, perceived learning, and persistence in a MOOC … and also to consider other factors that may (or may not) influence persistence in a MOOC (design features, such as live sessions / discussion boards?; personal characteristics, such as age / prior learning / professional affiliation with the topic?). Along this same line, I feel MOOCs (in their most common / current iteration) favor a certain type of learner (those who tend to be early adopters, self-starters, comfortable with messiness, comfortable with written communication, etc.). However, what about the other 99% of the population, including those who don’t want to publicly share their work, aren’t interested in the topic, or want / need more hand-holding?

  5. Hi David – I can connect with you off list if you’d like, but we are launching an open research project in a few weeks that might be of interest to Michael as it emphasizes addressing these kinds of complex research challenges through distributed networks rather than the traditional “faculty-student” relationship (though that is certainly a part of it). If Michael is interested, have him contact me and I’ll share more info on the structure of this project.

    In addition to some of the research projects already shared, Allison Littlejohn and colleagues at the Caledonian Academy are researching self-directed learning in MOOCs…

  6. Thanks everyone for sharing your thoughts and approaches to the research David and I have been discussing. It is very encouraging to see the thought that you all have put into this area of inquiry that I am beginning to grapple with. David, thanks for reaching out. This has been a tremendous help. Hope all is well at SxSW.

  7. Michael, count yourself lucky to have a PhD committee chair with a great network and the willingness to blog on your behalf!

    I am following the LAK12 course with a friend at MIT using some shared annotation tools and social media, but mostly via telephone. The technology may have improved over the years, but I find it’s the “buddy system” that keeps me going more than anything else.

    The “C” in “MOOC” could mean a lot of things. I hardly know what “course” means anymore in the context of a MOOC. I’m interested in flexible interactions with the right people and around a compelling topic. Lifelong learning comes in many shapes and sizes. Best of luck in your MOOC research!

  8. I’m enrolled in a MOOC through the for-profit company Udacity. The course is free. The learning outcomes are clear and project-focused. There is a uniform post-test (and tests throughout), and all the learners are highly motivated. The context for learning and the course focus is more helpful and relevant than any computer science course I had in college. There is a credential that learners can earn after successfully completing the course. Maybe this type of MOOC is unique, or is only just emerging?

Comments are closed.