Writing is Thinking: The Paradox of Large Language Models

Last week I had the amazing opportunity to speak at the 3rd Annual AI Summit at UNC Charlotte. The entire event was wonderful and the organizing team was terrific. My keynote wasn’t recorded, so I thought I would serialize it across a series of blog posts. This post is the first in that series, and this section of the talk was titled Writing Is Thinking. David McCullough said, “Writing is thinking. To write well is to think clearly. That’s why it’s so hard… We all know the old expression, ‘I’ll work my thoughts out on paper.’ There’s something about the pen that focuses the brain in a way that nothing else does.” ...

May 20, 2025 · David Wiley

Making AI a More Effective Teacher: Lessons from TPACK

Human Teachers and AI Teachers Would you be surprised if you pulled a random person off the street, shoved them into a classroom full of students, and then found that they weren’t a particularly effective teacher? Of course not. And why wouldn’t that be surprising? Because effective teaching requires a great deal of knowledge and skill, and the person you pulled off the street most likely had no relevant training. ...

March 24, 2025 · David Wiley

Why It Might Be Impossible to “AI-Proof” Written Assignments (and What We Can Do About It)

A significant amount of time, effort, and resources goes into training large language models (LLMs) to follow instructions. In fact, after the initial pre-training step, many models are specifically instruction-tuned to make them better at following instructions. If you’ve ever been poking around Hugging Face and wondered why some models have “Instruct” in their name (like Llama-3-8B vs. Llama-3-8B-Instruct), this is why. While a wide range of prompt engineering frameworks exist, they all have one thing in common: they help you write clear, detailed, thorough, accurate instructions for an LLM to follow. LLMs can complete simple tasks given only simple instructions (“Write a poem about a sunny day”), but in order to complete more complicated tasks they need more detailed instructions (e.g., see this 820-word ‘Updated Tutoring Prompt’ by Ethan Mollick that instructs the LLM to act as a tutor). Because many models are specifically instruction-tuned as part of their training process, clearer instructions generally result in better outputs from the model. ...
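The excerpt above notes that prompt engineering frameworks share one goal: clear, detailed, thorough instructions. As a purely illustrative sketch (the `build_prompt` helper and its component names are hypothetical, not taken from any particular framework or from Mollick’s prompt), here is how a detailed tutoring-style prompt might be assembled from the components such frameworks typically cover:

```python
# Illustrative sketch only: assembling a detailed prompt from the components
# most prompt engineering frameworks share (role, task, constraints, format).
# The helper and its names are hypothetical, not from any real framework.

def build_prompt(role, task, constraints, output_format):
    """Combine common prompt components into a single detailed instruction."""
    lines = [f"You are {role}.", f"Your task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]  # one bullet per constraint
    lines.append(f"Respond in this format: {output_format}")
    return "\n".join(lines)

# A simple instruction needs none of this scaffolding:
simple = "Write a poem about a sunny day"

# A complicated task (tutoring) benefits from detailed, explicit instructions:
detailed = build_prompt(
    role="a patient tutor who guides students with questions, not answers",
    task="help the student understand photosynthesis",
    constraints=[
        "ask one question at a time",
        "adapt to the student's prior answers",
        "never give the full answer outright",
    ],
    output_format="short conversational turns",
)

print(detailed)
```

The point is not the helper itself but the contrast: the more components of the task you spell out explicitly, the better an instruction-tuned model can follow them.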

July 1, 2024 · David Wiley

The Symmetrical Power of AI in Assessments

Large language models (LLMs) make it possible for faculty to rapidly create a wide range of formative and summative assessments for their students. And, as we hear about so often, students can also use LLMs to write their essays and complete other assignments. (Apparently, when faculty use AI to create assignments, it’s a “productivity gain,” but when students use AI to complete assignments, it’s “cheating.” That’s a topic for another day.) Reflecting on several conversations I’ve been part of at the SUNY CIT conference this week led me to realize an important principle about the symmetry of AI in assessment. LLMs are equally powerful tools for both faculty and students. Speaking solely about what is technologically possible (and not what is ethically appropriate), we might summarize this principle by saying: ...

May 22, 2024 · David Wiley

An “AI Student Agent” Takes an Asynchronous Online Course

The earlier we all start thinking about this problem, the sooner we can start generating ideas and potential solutions. Given the magnitude of impact generative AI is having and will have in education (and many other aspects of life), I’m working with some diligence to keep up to date with developments in the field. Recently, I noticed how a couple of the emerging capabilities of generative AI will come together in a way that will impact education far more dramatically than anyone currently seems to be discussing (if I’m missing this conversation somewhere, please help me connect to it!). But before I give away the punch line, let me share the individual pieces. Maybe you’ll see what I saw. ...

April 18, 2024 · David Wiley

The Near-term Impact of Generative AI on Education, in One Sentence

Preparing to participate in a panel on generative AI and education at this week’s AECT convention gave me the excuse to carve out some dedicated time to think about the question, “How would you summarize the impact generative AI is going to have on education?” This question is impossible to answer over the medium to long term, but maybe I could give an answer addressing the near term? My approach to this question was to look for a different, comparable example and try to work my way into the question from that more familiar territory. The internet seems like the obvious choice here, as no other recent advance can even begin to compare to the potential impact generative AI will have. ...

October 17, 2023 · David Wiley

Teaching Assistants that Actually Assist Instructors with Teaching

Last week I asked a “What if?” question about the way generative AI might change the ways that learners interact with instructional materials like textbooks. This week I’d like to ask another. I’m at the GRAILE workshop on AI and higher education in Denver today, and sitting here in this space I’m hearing things through a slightly different filter. For example, when someone mentioned teaching assistants earlier this morning, it made me think - what if generative AI could provide every instructor with a genuine teaching assistant - a teaching assistant that actually assisted instructors with their teaching? ...

July 13, 2023 · David Wiley

OER / ZTC Advocates Have an AI Problem

At some point over the last decade, open educational resources (OER) advocacy in US higher education became zero textbook costs (ZTC) advocacy. The two are intertwined now in a manner that would be difficult to disentangle even if you wanted to try. There are plenty of practical reasons why this might have happened. For example, politicians understand costs much better than they understand learning, which makes policy work and other political advocacy around eliminating textbook costs far easier than advocating for ways that “open” (whatever that word means) might be leveraged to improve student outcomes. But OER / ZTC advocates have had a fundamental problem simmering for many years now, and the recent advent of large language models (LLMs) like GPT-4 will quickly bring that simmer to a boil. ...

March 21, 2023 · David Wiley

The Difference Between an Informational Resource and an Educational Resource

Recently I’ve been thinking about the difference between an informational resource and an educational resource. I’ve had the sense that an educational resource is an informational resource with a little something extra and have enjoyed coming back to this thought again and again over the last several weeks, trying to reduce this “something extra” to its simplest form. Keeping the discussion informal, it seems that an informational resource is simply a compilation or collection of information - ideas, facts, processes, procedures, &c. I think of an encyclopedia as being the quintessential informational resource - comprehensive, accurate, and well-organized. If you accept that definition (for the sake of this argument), what would need to be added to an informational resource to make it an educational resource? ...

December 10, 2021 · David Wiley