learning analytics open content

From Actionable Dashboards to Action Dashboards

Dashboards in educational contexts usually consist of visualizations of educational data. For example, dashboards inside the LMS often consist of line charts, bar graphs, and pie charts of data like logins and quiz scores.

The primary goal of educational dashboard design is, ostensibly, for them to be “actionable.” In other words, a teacher, student, or administrator should be able to take an action after spending time interpreting a dashboard. For that to be possible, three questions must be answered in the affirmative.

  1. Does the visualization communicate something useful? (design relevance)
  2. Can faculty figure out what that useful bit of information is after studying the visualization? (data literacy)
  3. Can faculty figure out what to do based on that useful bit of information? (instructional design literacy)

Design Relevance

Let me use course design as a metaphor for educational dashboard design. All too often, course design starts with selecting a textbook. The course syllabus is then designed around the contents of the book (week 1 = chapter 1). Hopefully, the textbook comes with some homework problems, a quiz bank, or some other assignments that can be used to assess student learning. These also get scheduled on the syllabus. If it’s an accreditation year, faculty may need to reverse engineer the contents of their book and the assignments they’ve chosen to come up with some “learning outcomes” they can list on the syllabus.

This sequence of course design decisions is so common that the more appropriate way of designing a course is popularly referred to as “backward design.” Backward design begins by deciding what students should learn (choosing learning outcomes), then deciding what evidence they would have to present for you to believe they had learned it (designing assessments aligned with your learning outcomes), and then choosing activities for students to engage in that will support their learning (like reading a chapter, watching a video, participating in a debate, etc. aligned with your learning outcomes). “Backward design” begins from the goal rather than ending with it.

Similarly, dashboard design frequently begins with the question “what data are available to be visualized?” Once that question is answered, the discussion moves on to questions like “what are reasonable ways to visualize those data? Which would be easiest to understand? Which would be most beautiful?” Only rarely – if ever – is the question asked, “what do we hope a teacher will actually do after looking at this dashboard?” We seldom begin with the end in mind.

Faculty Literacies

Even when educational dashboard design begins with the end in mind, the effective use of dashboards still depends entirely on faculty’s levels of data literacy and instructional design literacy. Believe it or not, not all faculty are comfortable interpreting graphs. There’s no guarantee that even a majority of faculty can read “through” a visualization to grasp the useful information it’s trying to convey. And while all faculty will have had at least one math or quantitative literacy course, the overwhelming majority of them will have never received any instructional design training. This means they’re unlikely to be familiar with research suggesting which responses might be most effective in the context of whatever useful information they’ve gleaned from the dashboard.

What is an “Action Dashboard”?

Rather than making dashboards that are “actionable,” why not make dashboards of actions? In other words, what if we didn’t convert raw data into visualizations that we hope faculty have sufficient data literacy to interpret correctly, and then further hope that they also have the ID literacy necessary to do something useful with that information? What if dashboard designers applied the necessary data literacy and ID literacy upstream, in the design of the dashboard itself, and simply presented faculty with a list of specific actions they might consider taking? Here’s one example, presented two ways:

Current model: (1) Show a line graph of course activity.

A line graph of student course activity

(2) Hope that faculty can interpret it accurately. (3) Hope that faculty understand that a student not logging in regularly could indicate that they’re having some kind of trouble. (Yes, I realize that’s not what the data in my sample visualization show. But work with me here.) (4) Hope faculty decide that a reasonable thing to do would be to reach out to students who haven’t logged in for a while and check on how they’re doing.

New model: Create a list of students faculty should check in on because they haven’t logged in for a while.

Contact These Students
David Wiley email / dismiss suggestion
David hasn’t logged in for 14 days. Consider checking in today to see how they’re doing.
Elaine Wiley email / dismiss suggestion
Elaine hasn’t logged in for 8 days. Consider checking in soon to see how they’re doing.

What information should be included in an action dashboard?

Everyone knows I’m a fan of five-letter frameworks! The 5Ws you learned in school provide a great starting place for thinking about action dashboards.

  • What? Describe the specific action faculty should consider taking.
  • Why? Explain why they should consider taking it.
  • When? How soon should they decide whether or not to take action?
  • Who? Name each student the suggested action concerns.
  • Where? In what format, with what tools, or in what place should faculty take the action?
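
If it helps to see the 5Ws as a concrete shape, a single dashboard entry can be modeled as a small record. This sketch is purely illustrative; the class and field names are mine, not any product’s API:

```python
from dataclasses import dataclass

@dataclass
class ActionSuggestion:
    """One row of an action dashboard, structured around the 5Ws."""
    what: str   # the specific action faculty should consider taking
    why: str    # the rationale behind the suggestion
    when: str   # how soon faculty should decide whether to act
    who: str    # the student(s) the action concerns
    where: str  # the format, tool, or place for taking the action

# Example entry matching the "Contact These Students" suggestion above.
suggestion = ActionSuggestion(
    what="Send a check-in message",
    why="Hasn't logged in for 14 days",
    when="Today",
    who="David Wiley",
    where="Email, via the LMS inbox",
)
```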

From my personal perspective (and based on the research on the impact of teacher-student relationships), I think action dashboards should be filled with suggested actions that help faculty proactively express care, support, and encouragement for their students. An action dashboard might suggest faculty Contact These Students because they haven’t logged in for over a week (“Everything ok?”), because they scored above 95% on a recent quiz (“You’re crushing it!”), because they’re struggling in class (“Could you come to office hours so I can help you?”) and for a range of other reasons. If these messages were templated, faculty could send more messages of care, support, and encouragement to more students more quickly.

An action dashboard might also suggest that faculty Focus on These Topics During Class based on questions a majority of students missed on the homework. Or that faculty Review These Topics During Class to help students implement spaced review of critical concepts. Or other things.

Students, Administrators, and Power Users

There is nothing in this discussion of action dashboards from the faculty perspective that doesn’t also apply to students or administrators. It is completely unrealistic to expect them to have all the expertise necessary to translate a dashboard full of visualizations into effective actions they can take in support of student learning. Students and administrators need action dashboards, too.

It’s also true that, for faculty, students, and administrators who do have higher levels of data literacy and instructional design literacy, access to the visualizations (or even raw data) could be more powerful than an action dashboard. And I would advocate for providing those in an “advanced” view of the dashboard. My main argument here has been that, if we want dashboards to be widely used so they can improve outcomes for more students, the default view shouldn’t require high levels of data literacy or instructional design literacy in order to be usable. The default view should be immediately usable by everyone.

(It’s a bit like a Mac – if you have a high degree of computer literacy, you can open the Terminal app and type Unix commands to your heart’s content. But that shouldn’t be the default user experience. The default UX should be – and is – point and click.)


An “action dashboard” is a dashboard filled with specific actions a user might consider taking, presented in the context of the 5Ws. Action dashboards can be used effectively by all faculty, students, and administrators, regardless of their levels of data literacy or instructional design literacy. While action dashboards can feel restrictive to power users, power users can be provided with different views more suited to their literacy levels.

learning analytics lumenlearning

Thoughts on Continuous Improvement and OER

Recently I’ve been doing both more thinking and more roll-up-your-sleeves working on continuous improvement of OER. Below I’m cross-posting two short pieces on this topic I recently published on Lumen’s site (here and here).

Improvement in post secondary education will require converting teaching from a solo sport to a community-based research activity. (Herbert A. Simon, 1986)

The faculty Lumen works with carry an enormous workload. Some have research, grant writing, and publication responsibilities in addition to teaching their courses. Some teach five or six courses per semester. Some have committee assignments and additional service responsibilities. Some drive across town several times per day as they try to string adjunct appointments at three institutions together into a career that pays the rent. All of our faculty have expertise in their discipline. Few have formal training in teaching or learning.

Herbert Simon, quoted above, was an “above average” faculty member. He won both the Turing Award for his work in computer science and the Nobel Prize for his work in economics. But even he realized that we can’t expect individual faculty to stay at the cutting edges of their discipline, teaching and learning practice, educational research, and the ever-changing technologies that can be used in the service of learning. This is why Simon called for us to come together as a community – there are countless ways in which education needs to be improved, and no one person, institution, or organization has the time or expertise to do it all alone. We need each other.

The role Lumen is choosing to play in the community working to improve education is to enable and empower learners and faculty with highly effective learning materials that become more effective every semester. And this process of making OER more effective every semester – also known as “continuous improvement” – is where we see some of the most exciting opportunities to collaborate with faculty.

Continuous improvement is an iterative cycle. In the case of OER, the continuous improvement cycle involves:

  • Creating or selecting OER for use in your course,
  • Instrumenting the OER for measurement,
  • Measuring the effectiveness of OER in supporting student learning,
  • Identifying areas where student learning was not effectively supported,
  • Making changes to the learning design of the OER in those underperforming areas, and
  • Beginning the cycle again.

Developed with funding from the Bill and Melinda Gates Foundation, Lumen’s Waymaker courses are designed specifically to support this continuous improvement process, and we have been refining our process for several years in collaboration with a small group of faculty. You can see an example of the difference in OER before and after we applied this internal continuous improvement process here:

While we’re still refining the tools we’ve created to support this work, we are now eager to open our continuous improvement process to all faculty members, with the goal of making it a genuinely community-based research activity. Here’s what we’re doing this fall:

  • We have analyzed data from Spring 2018 to empirically determine which learning outcomes students struggled with the most in five Waymaker courses. (Learn more about this process in this accompanying blog post.)
  • For each course, we have published a collection of “Learning Challenges Leaderboards” listing the learning outcomes students struggled with the most, together with links to the OER that didn’t adequately support student learning.

The RISE and Shine Initiative

We invite you to engage with us in a community-based continuous improvement process. We’re calling this initiative RISE & Shine. RISE is the analysis that identifies which content needs work (you can read more about RISE here). Once we’ve identified that content, we invite faculty to Shine by contributing their expertise to the improvement of OER.

You can participate by taking one or more of these steps:

  1. Raise your hand. Complete this form to let us know you’d like to be part of conversations about improving learning with OER. We’ll share Learning Challenges updates and include you in what’s happening in your discipline.
  2. Reflect. Look at the Learning Challenges Leaderboard in your discipline. Think about what you do to make learning better for your students as you’re tackling these challenging topics, and compare that with the approach taken in the aligned OER. How would you do things differently?
  3. Share ideas. Have ideas about how we should make the OER supporting these difficult topics more effective? Share them here.
  4. Share improvements. Do you have a short video, an interactive activity, an edited version of the existing OER, or any other improved content you’ve developed to improve your students’ understanding? If so, submit them using this form. Whenever your contributions are included in Lumen course materials, your work is attributed. And you’ll be able to see the effect your contributions have on student learning in the next semester’s Learning Challenges Leaderboard update.

At Lumen we’re serious about making improving education a community-based research activity. That’s why we collaborate with faculty throughout the course improvement process, openly license the improvements we make to content, publish our continuous improvement frameworks in open access journals, and open source many of the tools we create to support our continuous improvement efforts.

However, we’re just one company. Truly transforming education will require more people and organizations to adopt a continuous improvement mindset. Given the amount of effort and the range of expertise required to engage in continuous improvement, Simon’s admonition to do this work collaboratively resonates with us as being deeply true.

We hope you’ll become part of this community-based effort with us.

The potential of the “data revolution” in teaching and learning, just as in other sectors, is to create much more timely feedback loops for tracking the effectiveness of a complex system. In a field where feedback is already well established as a vital process for both students and educators, the question is how this potential can be realized through effective human-computer systems (Buckingham Shum and McKay, 2018).

Open educational resources (OER) are educational materials whose copyright licensing grants everyone free permission to engage in the 5R activities, including making changes to the materials and sharing those updated materials with others. Consequently, everyone who wants to continuously improve OER has permission to do so. (Not so with traditionally copyrighted materials, whose licensing allows only the rightsholder to alter and improve the content.)  Permission to make changes is a necessary – but not sufficient – condition for continuous improvement.

In addition to permission to make changes, improvement requires a capacity for measurement. We can say we’ve changed OER without measuring the impact of those changes, but we can only say we’ve improved OER when we have measured student outcomes and confirmed that they have actually changed for the better.

Continuous improvement of OER, then, is the iterative process of:

  • Instrumenting OER for measurement,
  • Measuring their effectiveness in supporting student mastery of learning outcomes,
  • Identifying areas where student mastery of those learning outcomes was not effectively supported,
  • Making changes to the learning design of the underperforming OER aligned to those learning outcomes, and then
  • Beginning the cycle again so we can:
    • Measure the impact of those changes and determine whether or not they were actually improvements (not just changes), and
    • Identify additional areas that need strengthening.
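
The steps above can be sketched as a loop in which only measured gains count as improvement. The simulation below is a toy illustration of the cycle’s shape; the function, threshold, and mastery numbers are all invented and bear no relation to Lumen’s actual tooling:

```python
import random

def continuous_improvement(mastery, threshold=0.7, semesters=3, seed=1):
    """Toy simulation of the OER continuous improvement cycle.

    `mastery` maps each learning outcome to the fraction of students
    who mastered it last semester (a stand-in for real measurement).
    """
    rng = random.Random(seed)
    for _ in range(semesters):
        # Identify areas where student mastery was not effectively supported.
        weak = [o for o, score in mastery.items() if score < threshold]
        for outcome in weak:
            # "Revise" the learning design. In this toy model a revision
            # nudges mastery by a random amount; only next semester's
            # measurement tells us whether the change was an improvement.
            mastery[outcome] = min(1.0, mastery[outcome] + rng.uniform(0.0, 0.2))
    return mastery
```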

Engaging in the continuous improvement of OER in this manner allows us to make OER support learning more effectively each semester.

Learning Design and Continuous Improvement

Lumen instruments OER for measurement at the individual learning outcome level. Outcome alignment is at the very core of both our learning design process and our continuous improvement process. The outcome alignment process has three parts.

A visualization of the relationships between the more than 250 learning outcomes in Waymaker Microeconomics

First, we collaborate with faculty to identify each of the individual skills we want to support students in mastering. These detailed outcomes are, like all the content Lumen creates, licensed CC BY. Second, we align each individual page of content with the one or more outcomes whose mastery it supports. Finally, we align each assessment item with the outcome it is designed to assess. In the case of Waymaker Microeconomics, for example, that means aligning over 2,350 individual assessment items appearing in pre-tests, interactive practice opportunities, self-checks, and end-of-module quizzes with the appropriate learning outcome.

If that sounds like an incredible amount of work, that’s because it is!

But it’s worth it. In addition to providing benefits in the learning design process that we don’t discuss here, outcome alignment is fundamental to the continuous improvement process. With assessment items aligned to individual outcomes in pre-tests, practices, self-checks, and end-of-module quizzes, we can model learning over time, from the beginning of the module (the pre-test occurs before students see any OER) to the second attempt on the end of module quiz (after students have used and reused the OER). Similarly, because all course content is outcome-aligned, we can examine how patterns of OER usage correlate with performance on aligned assessments.

Analyzing the Effectiveness of OER

This process begins with a RISE analysis. I published the RISE framework last year with Bob Bodily and Rob Nyland, two amazing PhD students at BYU. Earlier this year I also published an open source implementation of RISE in the Journal of Open Source Software. RISE analysis divides performance on assessments into two categories, higher and lower, and usage of OER into the same two categories, higher and lower. These are matrixed to create four ways of diagnosing how OER are working in support of student learning.

Each combination of usage and grades suggests a different diagnosis:

  • Lower use of OER, higher grades: high student prior knowledge, inherently easy learning outcome, highly effective content, or a poorly written assessment.
  • Higher use of OER, higher grades: effective resources, effective assessment, strong outcome alignment.
  • Lower use of OER, lower grades: low motivation or high life distraction, too much material, or technical or other difficulties accessing resources.
  • Higher use of OER, lower grades: poorly designed resources, poorly written assessments, poor outcome alignment, or a difficult learning outcome.

Each outcome in the course is placed in one of these four categories, as in the visualization below. We focus first on those outcomes in the lower right corner, where usage of OER is high but performance on aligned assessments is low. These are places where effort invested in improving OER is most likely to improve student learning. Below we have drawn a blue diamond three standard deviations out from the origin (mean OER usage on the x-axis and mean assessment performance on the y-axis) to make it easier to visually identify outliers in need of immediate attention.
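
The mechanics of this placement are straightforward: compare each outcome’s usage and grade to the respective means, and flag outcomes that fall outside the diamond. The sketch below is only an illustration of that idea; the real implementation is the open source RISE package mentioned above, the data here are invented, and I’m treating the diamond as the region where the absolute z-scores sum to less than the radius:

```python
from statistics import mean, stdev

def rise_classify(outcomes, radius=3.0):
    """Place each outcome in a RISE quadrant and flag outliers.

    `outcomes` maps an outcome name to (oer_use, grade). Quadrants are
    defined relative to mean use and mean grade; an outcome is an outlier
    when it falls outside a diamond `radius` standard deviations from the
    means, i.e. |z_use| + |z_grade| > radius. Assumes non-degenerate data
    (at least two outcomes, non-zero variance).
    """
    uses = [u for u, g in outcomes.values()]
    grades = [g for u, g in outcomes.values()]
    mu_u, mu_g = mean(uses), mean(grades)
    sd_u, sd_g = stdev(uses), stdev(grades)
    results = {}
    for name, (u, g) in outcomes.items():
        quadrant = (("higher" if g >= mu_g else "lower") + " grades, " +
                    ("higher" if u >= mu_u else "lower") + " use")
        z = abs(u - mu_u) / sd_u + abs(g - mu_g) / sd_g
        results[name] = {"quadrant": quadrant, "outlier": z > radius}
    return results
```

Outcomes landing in the “lower grades, higher use” quadrant, especially the flagged outliers, are the ones where revising the OER is most likely to pay off.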

RISE analysis visualization of Introduction to Business

Making Targeted Improvements to OER

In the past, once the OER most in need of improvement were identified, we reached out to individual faculty to invite them to participate in the process of analyzing and improving course materials in collaboration with Lumen’s learning engineers and course designers. Moving forward, we will use the Learning Challenges Leaderboards to make this information public and invite the community to participate in the process of revising, remixing, finding, or creating new OER to better support student learning.

(In addition to continuously improving the OER based on outcomes data, we also make a wide range of other updates to our courses. For example, we update OER based on faculty feedback, current events, and the availability of new OER. We make improvements to assessments based on the results of item analysis, make improvements to features of the Waymaker platform (like faculty and student nudges) based on ways they correlate with student performance, and make improvements to supplementary materials based on faculty feedback.)

The Role of Learning Materials in Education

It would be easy to look at the effort Lumen invests in improving OER and other courseware components and come to the conclusion that we think learning materials are the most important part of education. That would be a mistake. We believe deeply that the contributions made by the learner and the faculty both significantly outweigh the importance of learning materials. However, we also believe that highly effective learning materials can dramatically amplify the efforts of learners and faculty. For example, we know that highly effective learning materials can help learners reach the same levels of mastery in half the time compared to materials that follow a traditional textbook design (Lovett et al., 2008).

There are myriad ways in which education needs to be improved. The role Lumen is choosing to play in the community working to improve education (which extends far beyond problems relating to learning materials) is to enable and empower learners and faculty with highly effective learning materials that become more effective every semester.

We’re working to engage a broad community of educators and institutions in the work of improving education by continuously improving OER course materials. We’re trying to make this complex task more transparent, measurable, and participatory. Given the creativity and commitment of the community we serve, we have every hope of success.

learning analytics lumenlearning shuttleworth visualizations

Automatically Geocoding Higher Ed Institutions Using the Google Maps API and Google Spreadsheets

I recently needed to quickly create a map of the higher education institutions Lumen is working with, and consequently needed latitude and longitude coordinates for dozens of schools. Rather than look those up by hand, I created this little recipe for automatically retrieving coordinates from a school’s name using the Google Maps API and Google Spreadsheets. Here’s a demonstration of the recipe using a list of all the higher education institutions where I’ve taught:

I fully realize I’m no Tony Hirst, but thought this was interesting enough to share.
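
For anyone who’d rather script the same lookup than run it in a spreadsheet, here’s a rough Python equivalent against the public Google Maps Geocoding API endpoint. You’d need to supply your own API key; the key and school name below are placeholders:

```python
import json
import urllib.parse
import urllib.request

GEOCODE_ENDPOINT = "https://maps.googleapis.com/maps/api/geocode/json"

def geocode_url(school_name, api_key):
    """Build the Geocoding API request URL for an institution name."""
    params = urllib.parse.urlencode({"address": school_name, "key": api_key})
    return f"{GEOCODE_ENDPOINT}?{params}"

def geocode(school_name, api_key):
    """Return (lat, lng) for a school name, or None if the lookup fails."""
    with urllib.request.urlopen(geocode_url(school_name, api_key)) as resp:
        data = json.load(resp)
    if data.get("status") != "OK":
        return None
    location = data["results"][0]["geometry"]["location"]
    return (location["lat"], location["lng"])

# Usage (requires a real API key):
#   lat, lng = geocode("Brigham Young University", api_key="YOUR_API_KEY")
```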