With all the excitement in the air about big data, analytics, and adaptive instruction, it is easy to imagine a future of complete automation. In this future, algorithms will choose what we will learn next, which specific resources we will interact with in order to learn it, and the order in which we will experience these resources. All the guesswork will be taken out of the process – instruction will be “optimized” for each learner.
There are many reasons to be deeply concerned about this fully automated future. One of the things that concerns me most about this vision of “optimized” instruction is its potential to completely undermine learners’ development of metacognitive skills and deprive them of meaningful opportunities to learn how to learn.
Like every other skill – from playing the piano to factoring polynomials to reasoning about the likely causes of historical events – learning how to learn requires practice. Learners need opportunities to plan out their own learning and select their own study strategies and learning resources. Learners need opportunities to monitor and evaluate the effectiveness of the strategies and resources they’ve selected in support of their own learning. Learners need to experience – and reflect on – a range of successes and failures in regulating their own learning in order to understand what works for them, and how they should approach the next learning task they encounter in school or life.
Some adaptive systems are designed specifically to take control of these metacognitive processes away from learners. These systems make decisions on behalf of the learner, monitoring what does and doesn’t appear to be working and updating their internal models and strategies. The processes by which these decisions are made are hidden from the learner, and are likely trade-secret black boxes into which no reviewer can ever peer. At the end of the current reading, or video, or simulation, the system presents the learner with a “Next” button that hides all the complexity and richness of the underlying decision and simply serves up the “optimal” next learning activity.
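To make the pattern concrete, here is a minimal, hypothetical sketch of the kind of fully automated loop described above. The class names, the learner model, and the scoring rule are all invented for illustration; real adaptive engines are far more sophisticated (and typically proprietary), but the shape of the interaction is the same: a hidden model, a hidden decision, and a single “Next.”

```python
# Hypothetical sketch of a fully automated adaptive loop.
# All names and the scoring logic are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class Activity:
    id: str
    kind: str          # "reading", "video", "simulation", "practice"
    difficulty: float  # 0.0 (easy) to 1.0 (hard)

@dataclass
class LearnerModel:
    # Hidden estimate of mastery per skill; the learner never sees this.
    mastery: dict = field(default_factory=dict)

class AdaptiveEngine:
    def __init__(self, activities):
        self.activities = activities
        self.model = LearnerModel()

    def record_result(self, skill, correct):
        # Quietly update the hidden model after each exercise.
        current = self.model.mastery.get(skill, 0.5)
        updated = current + (0.1 if correct else -0.1)
        self.model.mastery[skill] = min(1.0, max(0.0, updated))

    def next(self, skill):
        # The "Next" button: exactly one "optimal" activity is returned,
        # with no alternatives shown and no explanation of the choice.
        target = 1.0 - self.model.mastery.get(skill, 0.5)
        return min(self.activities, key=lambda a: abs(a.difficulty - target))

# From the learner's point of view, all of the above collapses into one click:
engine = AdaptiveEngine([Activity("a1", "reading", 0.3),
                         Activity("a2", "practice", 0.7)])
engine.record_result("fractions", correct=False)
print(engine.next("fractions").id)   # the learner just sees "Next"
```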
Without meaningful opportunities to develop metacognitive skills, there is no reason to believe that learners will develop these skills. (Have you ever spontaneously developed the ability to speak Korean without practicing it?) A fully adaptive system likely never provides learners with the opportunity to answer questions like “What should I study next?”, “How should I study it?”, “Should I read this or watch that?”, or “Should I do a few more practice exercises?” The ability to learn quickly and effectively may be the single most important skill a person living in the modern world can have. In this context, any potential short-term benefits of adaptive instruction seem like a poor trade.
Instead of designing technologies that make choices for students, we have an important opportunity to design technologies that explicitly support students as they learn to make their own choices effectively. Such technologies must respect learner agency, leaving key choices in learners’ hands – even at the risk that some of those choices will be suboptimal. (I should say that fully automated recommendations, like “Consider viewing this supplementary video,” fit within this framework to the degree that they respect learner agency.)
Most importantly, these new systems must provide learners with simple ways to reflect on the choices they make about their learning and the results of those choices. I believe that this kind of feedback, together with opportunities to reflect on what it means, will be a hallmark of future educational technologies that support radical improvements in learning.
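By way of contrast, here is an equally hypothetical sketch of a system built around learner choice and reflection. Again, every name and prompt is invented for illustration and represents only one of many ways such support could be designed.

```python
# Hypothetical sketch of an agency-respecting alternative.
# All names and prompts are invented for illustration only.

import datetime

class LearningJournal:
    """Surfaces suggestions with visible reasons, leaves the choice to the
    learner, and keeps a record the learner can revisit and reflect on."""

    def __init__(self):
        self.entries = []

    def suggest(self, options):
        # options: list of (activity, reason) pairs. Every option and the
        # reason it is being suggested are shown; the learner decides.
        for i, (activity, reason) in enumerate(options, start=1):
            print(f"{i}. {activity} -- suggested because: {reason}")
        choice = int(input("Which will you try? Enter a number: "))
        return options[choice - 1][0]

    def record(self, activity, strategy, worked, notes):
        # The learner's own judgment of what worked, not a hidden model's.
        self.entries.append({
            "when": datetime.datetime.now().isoformat(timespec="minutes"),
            "activity": activity,
            "strategy": strategy,   # e.g., "re-read", "practice problems"
            "worked": worked,
            "notes": notes,
        })

    def reflect(self):
        # A simple feedback prompt to support reflection on past choices.
        worked = sum(1 for e in self.entries if e["worked"])
        print(f"{worked} of {len(self.entries)} strategies felt effective "
              "to you. Which would you try again, and why?")
```

The point of the sketch is not the specific interface but the division of labor: the software surfaces options, reasons, and a record of past choices, while the deciding and the judging remain with the learner.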