One of the main concerns about generative AI is “cheating,” or students getting credit for work they didn’t do. This is a problem that collaborative learning has been grappling with for decades. In fact, if you think of generative AI as a collaborator in a group project, there is quite a lot of existing practice and literature we can tap into for guidance about using generative AI effectively in the service of learning – both in how students learn and how instructors assess.
Think about the kinds of questions you might ask as you design a group project:
- What are the individual tasks that need to be accomplished?
- Who will do each task?
- How will the individual contributions be integrated?
- How will collaborators hold each other accountable?
- How will instructors know who did what?
- How will grades be awarded to individuals?
- &c.
Some of these questions might be answered by the instructor and included in the assignment instructions; others may be left to the project participants to decide for themselves. But think about those questions from the perspective of a student using generative AI to complete an assignment. If you conceptualize generative AI as a collaborator and not merely a tool, and you take the view that “all work is group work now,” the tools and techniques that have been developed over the years to make group work successful suddenly feel much more broadly relevant.
There are thousands of websites with guidance about how to engage students in collaborative learning. One that came up near the top of my Google search is the Cornell University Center for Teaching and Learning’s site on Collaborative Learning. Reading through their collaborative learning guidelines while imagining that one of the collaborators could be generative AI is an enlightening exercise. One section of the Cornell website with the heading Considerations for Using Collaborative Learning includes:
- Introduce group or peer work early in the semester to set clear student expectations.
- Establish ground rules for participation and contributions.
- Plan for each stage of group work.
- Carefully explain to your students how groups or peer discussion will operate and how students will be graded.
- Help students develop the skills they need to succeed, such as using team-building exercises or introducing self-reflection techniques.
- Consider using written contracts.
- Incorporate self-assessment and peer assessment for group members to evaluate their own and others’ contributions.
These are all things one would be wise to consider when asking students to work with ChatGPT. Here’s a minor rewording that makes the parallel explicit:
- Introduce work with generative AI early in the semester to set clear student expectations.
- Establish ground rules for participation and contributions.
- Plan for each stage of collaborating with generative AI.
- Carefully explain to your students how collaborating with generative AI will operate and how students will be graded.
- Help students develop the skills they need to succeed, such as using zero-shot, one-shot, and few-shot prompting techniques (illustrated in the sketch after this list), as well as self-reflection techniques.
- Consider using written contracts.
- Incorporate self-assessment and peer assessment for learners to evaluate their own and generative AI’s contributions.
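To make the prompting bullet above a little more concrete, here is a minimal sketch of what zero-shot, one-shot, and few-shot prompts might look like for the same classroom task. Everything in it (the feedback task, the worked examples, the thesis statements) is a hypothetical illustration, not something drawn from the Cornell guidelines; substitute whatever task your assignment actually involves.

```python
# A minimal illustration of zero-shot, one-shot, and few-shot prompting.
# The task (asking generative AI for feedback on a thesis statement) and
# the worked examples are hypothetical; swap in your own assignment's task.

TASK = "Give one sentence of feedback on this thesis statement."

STUDENT_THESIS = "Social media is bad for teenagers."

# Zero-shot: the model gets only the instruction and the input.
zero_shot = f"""{TASK}

Thesis: {STUDENT_THESIS}
Feedback:"""

# One-shot: a single worked example shows the format and tone we expect.
one_shot = f"""{TASK}

Thesis: Climate change is a problem.
Feedback: Too broad; name the specific cause, effect, or policy you will argue about.

Thesis: {STUDENT_THESIS}
Feedback:"""

# Few-shot: several worked examples pin down the expected style more firmly.
few_shot = f"""{TASK}

Thesis: Climate change is a problem.
Feedback: Too broad; name the specific cause, effect, or policy you will argue about.

Thesis: Mechanized textile production transformed Britain's economy during the Industrial Revolution.
Feedback: Specific and arguable; consider signaling the evidence you will use to support it.

Thesis: {STUDENT_THESIS}
Feedback:"""

if __name__ == "__main__":
    # Paste any of these into a chat interface (or send them through an API
    # client) and compare how the added examples shape the model's reply.
    for name, prompt in [("zero-shot", zero_shot),
                         ("one-shot", one_shot),
                         ("few-shot", few_shot)]:
        print(f"--- {name} ---\n{prompt}\n")
```

The point is not the Python itself – students can type these prompts directly into a chat window. The code simply makes the progression from zero examples to several easy to see side by side.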
Another section of the Cornell site with the heading Getting Started with Collaborative Learning, which focuses on short, in-class activities, includes additional guidance that maps easily onto collaborating with generative AI:
- Introduce the task. This can be as simple as instructing students to turn to their neighbor (or open their generative AI) to discuss or debate a topic.
- Provide students with enough time to engage with the task. Walk around and address any questions as needed.
- Debrief. Call on a few students to share a summary of their conclusions. Address any misconceptions or clarify any confusing points. Open the floor for questions.
Once you begin thinking of generative AI as a legitimate collaborator rather than simply a tool, you begin to see how popular collaborative learning activities like think-pair-share and peer editing could be adapted to work with generative AI in a relatively straightforward manner.
The Eberly Center at Carnegie Mellon University offers resources about Assessing Group Work. Read some of their recommendations below from the perspective of the “group” being a student working with generative AI.
Assess process, not just product.
If both product and process are important to you, both should be reflected in students’ grades – although the weight you accord each will depend on your learning objectives for the course and for the assignment. Ideally, your grading criteria should be communicated to students in a rubric. This is especially important if you are emphasizing skills that students are not used to being evaluated on, such as the ability to cooperate or meet deadlines.
Ask students to assess their own contribution to the team.
Have students evaluate their own teamwork skills and their contributions to the group’s process using a self-assessment of the process skills you are emphasizing. These process skills may include, among others, respectfully listening to and considering opposing views or a minority opinion, effectively managing conflict around differences in ideas or approaches, keeping the group on track both during and between meetings, promptness in meeting deadlines, and appropriate distribution of research, analysis, and writing.
Hold individuals accountable.
To motivate individual students and discourage the free-rider phenomenon, it is important to assess individual contributions and understanding as well as group products and processes. In addition to evaluating the work of the group as a whole, ask individual students to demonstrate their learning. This can be accomplished through independent write-ups, weekly journal entries, content quizzes, or other types of individual assignments.
There are definitely lessons for us to learn here as we think about how to assess students using generative AI to complete their assignments.
Here’s a second example, just to make the point about the power of this change of frame. Vygotsky defined the “zone of proximal development” as “the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance, or in collaboration with more capable peers” (Vygotsky, 1978, p. 86). In other words, the zone of proximal development includes the set of problems too hard for a learner to solve on their own, but that are solvable with assistance from what has also been called a “more knowledgeable other.” When you think of generative AI as a “more knowledgeable other,” this unlocks a way for you to bring some of Vygotsky’s work on sociocultural learning to bear as you think about using generative AI to support learning.
This won’t always work, of course. But you may find it interesting to think about your favorite model of social learning, see where it might make sense to include generative AI as a collaborator or an “other” in that model, and then tease out which lessons already learned in that space can be applied to using generative AI to support learning.
If, for whatever reason, you prefer to continue thinking of generative AI as a technology rather than a collaborator, you might explore connections to Jonassen’s idea of “cognitive tools” or explore connections with “computer-supported collaborative learning” to look for research, theory, and practice to help ground your use of generative AI in support of learning. I will admit that feels a little more incremental to me than the larger reconceptualization of generative AI as a collaborator, but good things come from incremental approaches, too.