
From Actionable Dashboards to Action Dashboards

Dashboards in educational contexts usually consist of visualizations of educational data. For example, dashboards inside the LMS often feature line charts, bar graphs, and pie charts of data like logins and quiz scores.

The primary goal of educational dashboard design is, ostensibly, for them to be “actionable.” In other words, a teacher, student, or administrator should be able to take an action after spending time interpreting a dashboard. For that to be possible, three questions must be answered in the affirmative.

  1. Does the visualization communicate something useful? (design relevance)
  2. Can faculty figure out what that useful bit of information is after studying the visualization? (data literacy)
  3. Can faculty figure out what to do based on that useful bit of information? (instructional design literacy)

Design Relevance

Let me use course design as a metaphor for educational dashboard design. All too often, course design starts with selecting a textbook. The course syllabus is then designed around the contents of the book (week 1 = chapter 1). Hopefully, the textbook comes with some homework problems, a quiz bank, or some other assignments that can be used to assess student learning. These also get scheduled on the syllabus. If it’s an accreditation year, faculty may need to reverse engineer the contents of their book and the assignments they’ve chosen to come up with some “learning outcomes” they can list on the syllabus.

This sequence of course design decisions is so common that the more appropriate way of designing a course is popularly referred to as “backward design.” Backward design begins by deciding what students should learn (choosing learning outcomes), then deciding what evidence they would have to present for you to believe they had learned it (designing assessments aligned with your learning outcomes), and then choosing activities for students to engage in that will support their learning (like reading a chapter, watching a video, participating in a debate, etc. aligned with your learning outcomes). “Backward design” begins from the goal rather than ending with it.

Similarly, dashboard design frequently begins with the question “what data are available to be visualized?” Once that question is answered, the discussion moves on to questions like “what are reasonable ways to visualize those data? Which would be easiest to understand? Which would be most beautiful?” Only rarely – if ever – is the question asked, “what do we hope a teacher will actually do after looking at this dashboard?” We seldom begin with the end in mind.

Faculty Literacies

Even when educational dashboard design begins with the end in mind, the effective use of dashboards still depends entirely on faculty’s levels of data literacy and instructional design literacy. Believe it or not, not all faculty are comfortable interpreting graphs. There’s no guarantee that even a majority of faculty can read “through” a visualization to grasp the useful information it’s trying to convey. And while all faculty will have had at least one math or quantitative literacy course, the overwhelming majority of them will have never received any instructional design training. This means they’re unlikely to be familiar with research suggesting which responses might be most effective in the context of whatever useful information they’ve gleaned from the dashboard.

What is an “Action Dashboard”?

Rather than making dashboards that are “actionable,” why not make dashboards of actions? In other words, what if we didn’t convert raw data into visualizations that we hope faculty have sufficient data literacy to interpret correctly, and then further hope that they also have the ID literacy necessary to do something useful with that information? What if dashboard designers applied the necessary data literacy and ID literacy upstream, in the design of the dashboard itself, and simply presented faculty with a list of specific actions they might consider taking? Here’s one example, presented two ways:

Current model: (1) Show a line graph of course activity.

[Figure: a line graph of student course activity. Source: https://community.canvaslms.com/docs/DOC-17996-how-do-i-compare-the-weekly-online-activity-chart-graph-with-a-section-or-student-filter-in-new-analytics]

(2) Hope that faculty can interpret it accurately. (3) Hope that faculty understand that a student not logging in regularly could indicate that they’re having some kind of trouble. (Yes, I realize that’s not what the data in my sample visualization show. But work with me here.) (4) Hope faculty decide that a reasonable thing to do would be to reach out to students who haven’t logged in for a while and check on how they’re doing.

New model: Create a list of students who faculty should check in on because they haven’t logged in for a while.

Contact These Students

  • David Wiley (email / dismiss suggestion)
    David hasn’t logged in for 14 days. Consider checking in today to see how they’re doing.
  • Elaine Wiley (email / dismiss suggestion)
    Elaine hasn’t logged in for 8 days. Consider checking in soon to see how they’re doing.
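
To make the new model concrete, here’s a minimal sketch of the kind of logic that could sit behind such a list. Everything in it is a hypothetical illustration: the Student records, the day thresholds, and the message wording are my assumptions, not a real LMS API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical student record; a real implementation would pull
# last-activity timestamps from the LMS rather than hard-coding them.
@dataclass
class Student:
    name: str
    email: str
    last_login: date

def inactivity_suggestions(students, today=None, urgent_days=14, soon_days=7):
    """Apply the data literacy upstream: turn raw last-login data into
    specific suggested actions faculty can simply act on."""
    today = today or date.today()
    suggestions = []
    for s in students:
        days_out = (today - s.last_login).days
        if days_out >= urgent_days:
            when = "today"
        elif days_out >= soon_days:
            when = "soon"
        else:
            continue  # logging in regularly; nothing to suggest
        suggestions.append(
            f"{s.name} hasn't logged in for {days_out} days. "
            f"Consider checking in {when} to see how they're doing."
        )
    return suggestions

roster = [
    Student("David Wiley", "david@example.edu", date.today() - timedelta(days=14)),
    Student("Elaine Wiley", "elaine@example.edu", date.today() - timedelta(days=8)),
]
for suggestion in inactivity_suggestions(roster):
    print(suggestion)
```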

What information should be included in an action dashboard?

Everyone knows I’m a fan of five-letter frameworks! The 5Ws you learned in school provide a great starting place for thinking about action dashboards; a sketch of how they might map onto a data structure follows the list.

  • What? Describe the specific action faculty should consider taking.
  • Why? Explain why they should consider taking it.
  • When? How soon should they decide whether or not to take action?
  • Who? Name each student the suggested action concerns.
  • Where? In what format, with what tools, or in what place should faculty take the action?
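
One way to read the 5Ws is as the schema for a single suggestion record. The sketch below shows what that schema might look like; it’s an assumption for illustration, not a spec from any real product.

```python
from dataclasses import dataclass

# Hypothetical schema: every suggestion on an action dashboard
# answers all five Ws before it ever reaches the faculty member.
@dataclass
class ActionSuggestion:
    what: str        # the specific action ("Send a check-in email")
    why: str         # the reason ("No logins in the last 14 days")
    when: str        # the urgency ("today", "before Friday's class")
    who: list[str]   # the students the action concerns
    where: str       # the format/tool/place ("LMS inbox", "office hours")
```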

From my personal perspective (and based on the research on the impact of teacher-student relationships), I think action dashboards should be filled with suggested actions that help faculty proactively express care, support, and encouragement for their students. An action dashboard might suggest faculty Contact These Students because they haven’t logged in for over a week (“Everything ok?”), because they scored above 95% on a recent quiz (“You’re crushing it!”), because they’re struggling in class (“Could you come to office hours so I can help you?”) and for a range of other reasons. If these messages were templated, faculty could send more messages of care, support, and encouragement to more students more quickly.
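
Templating those messages could be as simple as a lookup table keyed by the trigger that fired. The triggers and wording below are illustrative assumptions:

```python
# Hypothetical templates for messages of care, support, and encouragement.
CHECK_IN_TEMPLATES = {
    "inactive":   "Everything ok? I noticed you haven't logged in for a while.",
    "excelling":  "You're crushing it! Great work on that last quiz.",
    "struggling": "Could you come to office hours so I can help you?",
}

def draft_message(student_name, trigger):
    """Prefill a message for faculty to review, edit, send, or dismiss."""
    return f"Hi {student_name}, {CHECK_IN_TEMPLATES[trigger]}"

print(draft_message("Elaine", "inactive"))
```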

An action dashboard might also suggest that faculty Focus on These Topics During Class based on questions a majority of students missed on the homework. Or that faculty Review These Topics During Class to help students implement spaced review of critical concepts. Or other things.
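
The same pattern applies here: the analysis happens upstream. A small sketch, with made-up homework data standing in for whatever the LMS actually records:

```python
from collections import Counter

# Hypothetical homework results: the topics of the questions each
# student missed.
missed_by_student = {
    "David":  ["fractions", "ratios"],
    "Elaine": ["fractions"],
    "Jerry":  ["fractions", "exponents"],
}

def topics_to_focus_on(missed_by_student, threshold=0.5):
    """Suggest topics that a majority of students missed on the homework."""
    counts = Counter(t for topics in missed_by_student.values() for t in topics)
    n = len(missed_by_student)
    return [topic for topic, c in counts.items() if c / n > threshold]

print("Focus on these topics during class:", topics_to_focus_on(missed_by_student))
```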

Students, Administrators, and Power Users

There is nothing in this discussion of action dashboards from the faculty perspective that doesn’t also apply to students or administrators. It is completely unrealistic to expect them to have all the expertise necessary to translate a dashboard full of visualizations into effective actions they can take in support of student learning. Students and administrators need action dashboards, too.

It’s also true that, for faculty, students, and administrators who do have higher levels of data literacy and instructional design literacy, access to the visualizations (or even raw data) could be more powerful than an action dashboard. And I would advocate for providing those in an “advanced” view of the dashboard. My main argument here has been that, if we want dashboards to be widely used so they can improve outcomes for more students, the default view shouldn’t require high levels of data literacy or instructional design literacy in order to be usable. The default view should be immediately usable by everyone.

(It’s a bit like a Mac – if you have a high degree of computer literacy, you can open the Terminal app and type Unix commands to your heart’s content. But that shouldn’t be the default user experience. The default UX should be – and is – point and click.)

Conclusion

An “action dashboard” is a dashboard filled with specific actions a user might consider taking, presented in the context of the 5Ws. Action dashboards can be used effectively by all faculty, students, and administrators, regardless of their levels of data literacy or instructional design literacy. While action dashboards can feel restrictive to power users, power users can be provided with different views more suited to their literacy levels.


On the Impossibility of the Community-based Production of Learning Content

UPDATE: I borrowed the “community based” language in the title of this post from Martin’s blog, which reminded me of Yochai’s article and prompted this post. That language has caused confusion on social media. (Long-time readers of this blog will be surprised to learn that definitions matter!) I should have used Yochai’s language of “peer production of educational materials” from the start. Perhaps that would have headed off some of the misunderstanding on Twitter. Perhaps.

In a post this morning, Martin wrote, “We’ve still not really cracked a community based production model for learning content.” It got me thinking.

Back in 2005 I was blessed with the opportunity to commission a short paper from Yochai Benkler (who did much of the first serious work on the economics of open source software development, e.g., Coase’s Penguin and Sharing Nicely) in conjunction with a keynote talk he gave at OpenEd that year. The paper he produced, Common Wisdom: Peer Production of Educational Materials, is what I believe to be one of the most important and least known writings in the first, formative decade of open education as we know it today.

Benkler’s main argument in the paper focuses on the relationship between modularization and integration. (The argument may sound familiar to readers who have encountered the reusability paradox.) He points out that the number of people who will volunteer to contribute to a project grows as the “smallest unit of contribution” shrinks. If a contribution can be made in a few minutes, many people might be willing/able to contribute to producing learning content. If a contribution requires a minimum of a few hours, far fewer people may be willing/able to contribute. “As I have elsewhere discussed in great detail, the size of the potential pool of contributors, and therefore the probability that the right person with the right skills, motivation, and time will be available for the job is inversely related to the granularity of the modules” (pp. 21-22). Schweik and English would later empirically demonstrate that this is true in the context of open source software.

If modularization and its effect on the availability of volunteers is one side of the problem, the other is the leadership, administration, and integration necessary to bring a very large collection of very small pieces together into a useful whole. “Integrating and smoothing out the text, style, and coherent structure of a chapter from contributions in much smaller tasks becomes much harder. The result of making the modules more fine grained may be to make the integrated whole too difficult to render coherent” (p. 20).

He summarizes: “The larger the granules the more is required of each contributor, the smaller the set of agents who will be willing and able to take a crack at the work. On the other hand, the granularity is determined by the cost of integration—you cannot use modules that are so fine that the cost of integrating them is higher than the value of including the module. The case of textbooks seems to be, at present, precisely at the stage where the minimal granularity of the modules in some projects—like FHSST—is too large to capture the number of contributions necessary to make the project move along quickly and gain momentum, whereas the cost of integration in others, like WikiBooks, is so high that most of the projects languish with a module here, and module there, and no integration” (pp. 21-22).
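
Read as a back-of-the-envelope model (my notation, not Benkler’s), the tradeoff he describes looks something like this:

```latex
% g = granularity: the size of the smallest unit of contribution.
% The contributor pool shrinks as g grows, and a module is only
% worth integrating when its value exceeds its integration cost.
\[
  N_{\text{contributors}}(g) \;\propto\; \frac{1}{g}
  \qquad \text{and} \qquad
  \text{integrate a module only if } V(g) > C_{\text{int}}(g).
\]
% FHSST fails the first condition (g too large, so N too small);
% WikiBooks fails the second (g so small that integration costs
% swamp the value of each module).
```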

(The one place where I would push back on Yochai’s analysis is in what he sees as the difference between the adopters of K-12 textbooks and college textbooks. While K-12 textbooks need high degrees of coherence and must be created in accordance with existing standards before they can be adopted, he proposed that the same wasn’t true for higher education. It’s generally up to post-secondary teachers “to construct, integrate, and use the materials as fits their needs. No higher order organization is required, and none therefore represents a barrier to contribution” (p. 23). In other words, you could simply have a community create lots of very small pieces without requiring any centralized integration service, because every faculty member will undertake that work on their own and do it in a way that meets their specific local needs. While this statement might reflect the reality of faculty who teach upper-level undergraduate and graduate courses, it’s also true that over half of US college and university courses are taught by adjunct faculty who need a complete, coherent learning resource that is ready to pick up and teach from on day one. So, for the foreseeable future, the problems of modularization and integration apply to higher ed as well as K-12.)

The problems associated with the need to modularize and the need to integrate are just as real now as they were back in the early 2000s. (And, for those of you who worked on learning objects, in the 1990s.) This means that “we’ve still not really cracked a community based production model for learning content” is likely a dramatic understatement of the problem. There’s a good argument to be made that a community-based production model for learning content isn’t actually possible. Yes, it might be possible to set up a system where some people will contribute small pieces of learning content to a repository, but for the reasons described above those small pieces will never see adoption at scale due to problems relating to integration and coherence. And we should consider any production model that results in the creation of learning content that goes unused to be a failed model.


Learning Engineering and Reese’s Cups

Reposting this message I sent to the Learning Analytics mailing list earlier this morning.

When I hear people say “learning engineering” I hear them talking about Reese’s cups.

I hear them talking about delicious chocolate (instructional design, or applied learning science or whatever you like to call it) and yummy peanut butter (learning analytics, or educational data mining, or whatever you like to call it). Chocolate and peanut butter are two things that, individually, taste great. And they taste even better together. In fact, they taste so much better together that people gave the combination its own name! They didn’t give this heaven-sent sweetie its own name in order to exercise dominance over either the chocolate or peanut butter industries. It was just really convenient to have a specific name to talk about this utterly fantastic combination of things. “I want a Reese’s cup!”

As I understand it, learning engineering is nothing more or less than a specific way of combining ID/ALS and LA/EDM techniques in order to engage in the iterative, data-driven continuous improvement of products designed to support learning (sketched in code after the list):

  • You design something intended to support student learning (could be content, software, courseware, whatever).
  • You put it in the field and get students using it.
  • You measure its success at supporting student learning using a variety of analysis techniques.
  • You zero in on the parts that aren’t supporting student learning as successfully as you had hoped they would.
  • You re-design them.
  • You re-deploy them.
  • You re-analyze the degree to which they successfully support student learning.
  • You rinse and repeat.
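
Rendered as deliberately naive code, that loop looks something like this. Every function below stands in for an entire discipline’s worth of work (ID/ALS in the redesigns, LA/EDM in the measurement); none of it is a real API, and the simulated mastery scores are invented for illustration.

```python
import random

def deploy(product):
    print(f"Deploying {sorted(product)} to students...")

def measure_learning(product):
    # Stand-in for LA/EDM analysis: a mastery score per component.
    return {part: random.uniform(0.5, 1.0) for part in product}

def redesign(part):
    # Stand-in for ID/ALS work on an underperforming component.
    print(f"Redesigning '{part}' based on what the data showed.")

def learning_engineering_loop(product, target_mastery=0.8, max_rounds=5):
    deploy(product)                                   # get students using it
    for _ in range(max_rounds):
        scores = measure_learning(product)            # analyze
        weak = [p for p, s in scores.items() if s < target_mastery]
        if not weak:
            break                                     # good enough; stop iterating
        for part in weak:
            redesign(part)                            # zero in on the weak parts
        deploy(product)                               # re-deploy, then re-analyze

learning_engineering_loop({"chapter 1", "quiz 1", "practice simulation"})
```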

That’s how I understand “learning engineering.” I could just as easily say, “the combination of specific instructional design and learning analytics techniques in support of iterative, data-driven continuous improvement.” Well, actually, no I couldn’t say that just as easily. 🙂