Writing is Thinking: The Paradox of Large Language Models

Last week I had the amazing opportunity to speak at the 3rd Annual AI Summit at UNC Charlotte. The entire event was wonderful and the organizing team was terrific. My keynote wasn’t recorded, so I thought I would serialize it across a series of blog posts. This post is the first in that series, and this section of the talk was titled Writing Is Thinking.

David McCullough said, “Writing is thinking. To write well is to think clearly. That’s why it’s so hard… We all know the old expression, ‘I’ll work my thoughts out on paper.’ There’s something about the pen that focuses the brain in a way that nothing else does.”

Do you disagree?

Apparently, Plato disagreed. We frequently hear in the debates about AI that Plato thought that writing, if it became widespread, would move society backward instead of forward. But any time we hear these secondhand summaries of someone’s writing, I think it behooves us to go read the original (or at least a translation of the original). So here’s a relevant section from Phaedrus (and yes, I actually read the extended quote to the Summit participants):

(Socrates to Phaedrus): Well, I heard that at Naucratis in Egypt there was a certain ancient god of that place, whose sacred bird is the one they call the Ibis, while the name of the divine being himself was Theuth. He was first to discover number and calculation, geometry and astronomy, and also draughts and dice, and of course writing. Now at that time, Thamus was King of all Egypt round about the great city of the upper region. The Greeks call this city Egyptian Thebes and they refer to Thamus as Ammon. Theuth went to this King to show off his discoveries, and he proposed that they should be passed on to the rest of the Egyptians, and Thamus asked what benefit each of them possessed, and as Theuth explained this he praised whatever seemed worthwhile and criticised whatever did not. Now Thamus is said to have expressed many views both positive and negative to Theuth about each of the skills, so an account of these would be quite lengthy. But when he came to writing, Theuth said, “This branch of learning, O King, will make the Egyptians wiser and give them better memories, for I have discovered an elixir of both memory and wisdom.” The King replied, “Oh most ingenious Theuth, one man is able to invent these skills, but a different person is capable of judging their benefit or harm to those who will use them. And you, as the father of writing, on account of your positive attitude, are now saying that it does the opposite of what it is able to do. This subject will engender forgetfulness in the souls of those who learn it, for they will not make use of memory. Because of their faith in writing, they will be reminded externally by means of unfamiliar marks, and not from within themselves by means of themselves. So you have discovered an elixir not of memory but of reminding. You will provide the students with a semblance of wisdom, not true wisdom. 
For having heard a great deal without any teaching they will seem to be extremely knowledgeable, when for the most part they are ignorant, and are difficult people to be with because they have attained a seeming wisdom without being wise.

So who is right – Plato or McCullough? Is writing a curse or a boon? There’s actually not as much conflict between the two statements as there might appear. Plato is talking about writing’s effect on memory, while McCullough is talking about its effect on thinking. While related, these are definitely two different things. (But asking the “who’s right?” question and then giving participants time to discuss catalyzed some energetic small-group conversations.)

The question implied by those who invoke Plato in conversations about AI is, “Was what we gave up worth more or less than what we got in exchange?” Or, in other words, would we trade all that we’ve gained from writing over the millennia to regain access to the prodigious individual capacities for memory our ancestors had?

Recently I’ve been pondering what I think of as “the paradox of large language models.” The paradox of large language models is that you have to write for them in order to get them to write for you. We’re all familiar with the phrase “garbage in, garbage out.” If you write a prompt that is vague, ambiguous, disorganized, and unfocused, the model will give you output with those same characteristics. When a person uses an LLM for the first time and has a poor experience (“I knew this AI hype was all overblown exaggeration!”), the reason is often attributable to poor prompting on their part as opposed to a weakness in the model. Using an LLM for all but the most trivial tasks requires writing that is clear, specific, focused, well-organized, etc. And the more complex the task you want the LLM to perform, the more effective and powerful your writing has to be.
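To make the contrast concrete, here is a minimal illustration. The task, prompts, and wording below are my own hypothetical examples, not drawn from the talk or from any particular model’s documentation; the point is only that the second prompt states the audience, length, format, and success criteria that the first leaves implicit:

```python
# Illustrative only: two prompts for the same summarization task, showing how
# specificity shapes what an LLM can do with a request. Both prompts here are
# hypothetical examples.

vague_prompt = "Summarize this article."

specific_prompt = (
    "Summarize the article below for first-year undergraduates in three "
    "bullet points of at most 25 words each. Define any technical term you "
    "use, and end with one open question the article leaves unanswered.\n\n"
    "Article:\n{article_text}"
)

# The specific prompt names its audience, length, format, and success
# criteria -- exactly the qualities (clear, specific, focused,
# well-organized) that effective generative writing requires.
for name, prompt in [("vague", vague_prompt), ("specific", specific_prompt)]:
    print(f"{name} prompt: {len(prompt.split())} words of instruction")
```

Neither prompt is "wrong," but only the second gives the model enough to work with; the first forces it to guess at everything the writer never thought through.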

Now, instructors might interrupt here to ask, “If that’s true, then how are my students – many of whom are such immature writers – able to use AI to produce ‘A’ work on my writing assignments?” I love this question. Take a moment to reflect on what the answer to this riddle might be.

The answer, of course, is that students are using your assignments as their prompts! And – hopefully – the instructions for your assignments are written in a manner that is clear, specific, focused, and unambiguous.

Consequently, if you have a student who says something like, “Why do I need to master the core concepts of this course? AI can do all my work for me both now and after graduation!” the answer is: “After you graduate, there won’t be anyone there to write your prompts for you – you’ll have to write them yourself. When you try to use AI on the first day of your new job, you’ll have to understand the domain well enough to know what to ask the AI to do – using the right vocabulary, in the right way, with enough clarity and specificity to get a quality result. And if you don’t have the knowledge and skills you need to write that effectively, your first day on the job might be your last.”

Thinking about the importance of writing going forward – specifically, understanding that it’s a critical skill for the effective use of LLMs – makes me wonder if we’re not going to see a new mode of writing taught in our English Composition courses. In an English Composition class today we often learn about expository writing, persuasive writing, descriptive writing, etc. Maybe this new mode will be called “generative writing”? Whatever it’s called, writing for LLMs is different from other modes of writing. First, the audience is different – we’re writing for LLMs and not humans. And second, we’re engaged in some novel combination of process analysis and persuasive writing, trying to explain to the model what we want it to do and to actually get it to do it. Not only is generative writing unlike any other kind of writing we currently teach, it’s also probably the most economically valuable mode of writing a student could learn today.

I think calling this “generative writing” far more useful than calling it “prompt engineering,” since the former connects us to a rich body of literature, traditions, and scholarship of teaching, while the latter does not.