A Little Thought Experiment

Suppose a faculty member decides she wants to provide some positive reinforcement to students in her class next semester. She decides that each time a student scores 80% or higher on an exam, she’ll send them an email congratulating them and encouraging them to keep up the good work. Now, she has to decide how to send these messages. After a little thought, she decides she has four options:

  1. Review the gradebook each Saturday, find everyone who meets the criterion, and send them each an email.
  2. Prewrite a series of appropriate emails and store them in a text document. Review the gradebook each Saturday, find everyone who meets the criterion, and send each of them one of the prewritten messages.
  3. Prewrite a series of appropriate emails and store them in a text document. Write a script that parses the gradebook each Saturday and generates a list of people who meet the criterion. Send one of the prewritten messages to each person on the list generated by the script.
  4. Prewrite a series of appropriate emails and store them in a text document. Write a script that parses the gradebook each Saturday, generates a list of people who meet the criterion, and sends each of them one of the prewritten messages.

As she considers these four options, our faculty member wants to ensure that students are actually receiving a message “from their teacher” and that students will interpret the messages as such.

Which method(s) of sending the messages meets this standard? Why?
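For concreteness, here is a minimal Python sketch of what the script in options 3 and 4 might look like. The gradebook format, addresses, and message wording are all invented for illustration; the selection step is option 3's script, and actually dispatching the drafts (e.g. via `smtplib`) is what option 4 would add.

```python
import csv
import io
import random

# Hypothetical CSV gradebook: one row per student, "email" and "score" columns.
SAMPLE_GRADEBOOK = """email,score
alice@example.edu,91
bob@example.edu,72
carol@example.edu,85
"""

# The prewritten messages from the text document (inlined here for the sketch).
MESSAGES = [
    "Great work on the exam -- keep it up!",
    "Excellent result! I'm glad to see you doing so well.",
]

def students_meeting_criterion(csv_text, threshold=80):
    """Return email addresses of students scoring at or above the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["email"] for row in reader if float(row["score"]) >= threshold]

def draft_emails(recipients, messages):
    """Pair each recipient with one of the prewritten messages. Option 3 stops
    here and lets the teacher hit send; option 4 would send these directly."""
    return [(addr, random.choice(messages)) for addr in recipients]

if __name__ == "__main__":
    for addr, body in draft_emails(students_meeting_criterion(SAMPLE_GRADEBOOK),
                                   MESSAGES):
        print(f"To: {addr}\n{body}\n")
```

Note that the same selection function serves options 1 through 4; the four options differ only in how much of what surrounds it is done by hand.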

 


  • Pedagogical metaphysics: awesome.

    For my part, number four is a perfectly acceptable course of action, though those of a certain persuasion will no doubt prefer the intentionality and care (or does this really mean ‘time’, I wonder…) of one of the first two.

    I will bring this up with my (grade twelve) philosophy class tomorrow; we were talking a little about artificial intelligence and uploaded consciousnesses today (what constitutes a person? A mind? etcetera) – I’m sure they’ll get a kick out of this!

    I will offer option five: she can write a script to analyze and produce a logarithmic summary of encouraging feedback and constructive criticism of student engagement with the course. Sign the auto-generated email, “From your teacher.”

  • Hello, I’m not sure if there’s a received wisdom on this, but here’s how it sits with me.

    The first option meets the standard robustly, the second one barely, and the other two (for me) don’t. The reason is that the communication isn’t just about the drafting of the text by the teacher, but about the communicative channel opened in the moment of sending it, which ensures that it’s from one person to another at a specific moment. The two problems with automating the encouragement are that the intent is proxied to a script, and that the prewritten text can be sent indifferently and repeatably.

    So the administrative challenge becomes interesting. Human capacity being limited, the first option can’t scale. This suggests a showdown between authenticity and efficiency.

    I don’t really object to the third and fourth option; what I think makes them ethically messy solutions is their pretend status. Email is bad enough, but faux email? Why not make them badges and be done?

    I’m not sure why this puzzle hit me so forcefully when it came across my timeline, but whatever it was (a communication sent out with no particular intent to be addressed to me) now has this response (a communication sent back with a particular intent to be addressed to this question).

    Internet of humans.

    Best
    Kate

  • Paul-Olivier Dehaye

    Harder:
    Prewrite a series of scripts, taking parameters. Each Saturday, the instructor sets which script and which parameters will be used to select the students, and sends the prewritten email. For instance: those students who interacted with less than 50% of the content and scored less than 60% on the quiz.
    Now what if the email is customized to the filter?
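The parameterized-filter idea can be sketched like this. Everything here is a hypothetical illustration; the filter name, record fields, and thresholds are invented to match the comment's example:

```python
# A minimal sketch of the "parameterized script" idea: each Saturday the
# instructor chooses a filter and its parameters, and the filter selects
# the students. All names and thresholds below are hypothetical.

def low_engagement(students, max_content=0.5, max_quiz=0.6):
    """Students who viewed less than max_content of the content AND scored
    below max_quiz on the quiz -- the example criterion from the comment."""
    return [s for s in students
            if s["content_seen"] < max_content and s["quiz_score"] < max_quiz]

STUDENTS = [
    {"name": "Ana", "content_seen": 0.3, "quiz_score": 0.4},
    {"name": "Ben", "content_seen": 0.9, "quiz_score": 0.8},
]

# Saturday's choice: this filter, these parameters.
selected = low_engagement(STUDENTS, max_content=0.5, max_quiz=0.6)
for s in selected:
    # "Customized to the filter": the message names the criterion that fired.
    print(f"Dear {s['name']}, you've engaged with under half the content so far.")
```

Customizing the email to the filter, as the comment asks, would just mean attaching a message template to each filter, which arguably deepens the impersonation: the text now responds to the student's behaviour without the teacher ever having looked at it.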

  • James DiGioia

    I’m curious why none of the options include the possibility of scripting only the criterion-matching step, which informs the teacher which students are above 80% and pushes her to write bespoke messages for each matching student. That automates the tedious part of the task and lets the teacher do the emotional work of connecting with and supporting her students.

  • I use an Excel spreadsheet as my grade book and Word to do an email merge. Word has lots of options for if-then rules, etc. Why spend time on script writing? Just select a column in the grade book, set a condition, and send one statement if it’s met and a different one if it’s not.
    What about encouraging the low performing students? How do you engage them to work harder?

  • “Why spend time on writing?” This is the critical question at the heart of so much automation of teaching work at the moment. Can we save time, and to what purpose? I use automated feedback comments for common writing problems because the value to the student increases: if I write it 50 times, the 50th student gets a very degraded version of the advice. If I write it out once and thoroughly, any student who needs that advice gets the full menu.

    But an email of encouragement strikes me as a different kind of thing. It’s intended either to be a personal message, or to masquerade as one. Political campaigning, marketing, all the discourses that structure our lives, and that we justly dismiss as inauthentic, reach for us with the mimicry of personal communication. “Dear Kate” doesn’t make it so.

    I think we owe it to ourselves and students to be very, very careful with the impersonation of presence, because something really fundamental is at stake, involving trust.

  • Pingback: Extending a little thought experiment | The Weblog of (a) David Jones

  • Well to be honest, my comment may be influenced by the fact that I couldn’t write a script to do any of those things listed. But I’m very much inclined to agree with the notion that you need to be careful with “the impersonation of presence” when talking about e-mails of encouragement.

    When I send students notes of encouragement–and I do–I intentionally pick out characteristics of or statements in their writing that are unique to both their personality and their stage of development as a writer. I do this to make them aware that I recognize them individually and am aware of both their special strengths and struggles as writers. To put it more succinctly, I find scripting notes of encouragement and sending them out kind of creepy, very different from having a prewritten response to, say, dangling modifiers.

    But what an interesting question. I’ll be thinking about this for days.

  • The standard is not defined well enough. It does not specify that the teacher has actually taken notice of the achievement that met the 80% criterion. So from the point of view of ‘the teacher’ sending an email, any automatic method chosen to meet the criterion of sending a congratulatory email, no matter how detached it may be from the teacher, would meet the standard. However, an email ‘from the teacher’ is a key phrase in the criterion that also assumes the function of the teacher. In my book, a teacher must be aware of the achievements of each of his or her students. An automatic method that does not effectively permit this will not be appropriate. Of your four options, option 4 does not permit it.

  • Tomer

    Staying within the discussion and not questioning whether encouraging the high-performing students is the most important thing, here’s my suggested option 5 (inspired by some of the comments here):

    Automate *everything*, from identifying high-performing students to creating and sending out emails. The only difference is that it’s made explicit that the email is *not* sent from the teacher but from a computer program, and that the email is sent to both the student *and* the teacher. This way, there’s no pretending, and the student knows exactly what the teacher knows. The teacher can then respond to the email with a personal word of encouragement to the student.
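As a hypothetical illustration of this option 5, the notice could be built so that its automated origin is explicit and the teacher is cc'd. All addresses and wording below are invented:

```python
# A sketch of "option 5": fully automated, but honest about it. The notice
# states that it comes from a program, and the teacher is cc'd so that
# student and teacher see exactly the same message.

def automated_notice(student_email, teacher_email, score):
    """Build a transparent, machine-authored congratulation as a dict."""
    return {
        "to": [student_email],
        "cc": [teacher_email],   # the teacher sees what the student sees
        "subject": "Automated progress notice",
        "body": (f"This is an automated message from the course software, "
                 f"not from your teacher. Your latest exam score was {score}%. "
                 f"Nice work! Your teacher may follow up personally."),
    }

msg = automated_notice("student@example.edu", "teacher@example.edu", 87)
```

The design point is the cc line: because both parties receive the identical notice, the teacher's optional follow-up reply is unmistakably human, and the automated part never pretends otherwise.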

  • I keep coming back to this conversation because it’s so interesting, and it’s immediately relevant to me as I prepare to meet another round of 200 students.

    And I also keep coming back because I’ve jumped to the conclusion that this blog and all the replies have been written by humans, sitting at their keyboards, in the act of thinking. I take this for granted, but for how long? I would truly, truly hate to find that an exchange of this kind was a Turing test. I also have no idea, if blog bots and comment bots were to become common, how I would demonstrate that I’m a human.

    So I’ve been very wary for a while of the use of chat bots and teacher bots, especially in MOOCs.

    This is the basis on which I have been thinking about the student experience of getting automated messages. In Australia, students will be familiar with a whole range of automated notifications (your phone credit is running low, it’s time to pay a bill, don’t forget to enrol) and they’ll be entirely familiar with the ruses of personalisation that accompany these. I find that the more I think about this question, the more I find myself valuing the integrity of presence — but worrying in a practical sense about scale.

    So I like Tomer’s option 5: automate the messaging, but make it clear that this is an automatic system notification, that either the teacher or the student can choose to follow up.

    (And like Tomer, I also don’t think encouraging high achieving students is always the most helpful strategy—including for the high achieving students.)

    • This is indeed fascinating. I can almost see David suppressing a massive grin as he writes this seemingly innocent prose 🙂

      The main counterargument against automated scripts seems to be that automating out human involvement may have unforeseen detrimental effects on either the student (who may become disengaged after seeing through the automation [bound to happen]) or the teacher (who may become distanced from the act of teaching, lose track of students’ performance, or let habits of engagement decay).

      Well, why not test those experimentally? A control group and 1-4 experimental groups. A survey of attitudes at various times throughout the course could give us some qualitative data, and in a blended or online course we can track a variety of likely outcome variables (e.g. scores, quantity or quality of participation, student-student vs student-teacher interactions, etc.).

      But stepping back, I think the ultimate goal of this exercise is to identify teaching practices that not only /can/ be automated but /should/ be, without taking away aspects of the educational transaction that we hold dear. If we believe that the main role of technology in education is to decrease time or resource spend, we’ll continually be tempted to side with efficiency.