As we get ready for another academic year, we once again find ourselves faced with new challenges. This fall, many of us are revisiting the syllabus policies for generative Artificial Intelligence (AI) that we may have slapped together last year, and are trying in earnest to create a reasoned and workable approach.
It may feel overwhelming to craft a thoughtful policy for something that is still so new and unknown. First, I would offer that we give ourselves some grace, knowing that AI syllabus policies can and should change along with this rapidly evolving technology and our more slowly evolving understanding of it. As I’ve learned from my colleagues Christine Harrington and Melissa Thomas, who have literally written the book about the course syllabus, the syllabus is not a contract. It’s not a binding agreement between the faculty and students, but rather a communication, planning, and motivational guide that, when used appropriately, can be a tool for learning. I like to keep this in mind as I develop my AI policies.
I have a bit of an advantage in that most of my course content is about learning theory and teaching strategies. In a course about learning, it is natural for me to be figuring out the appropriate level of AI use in front of and with my students. But I also want to make sure I do my due diligence and create the best plan I can.
As I go through my own process, I’ll offer a few ideas inspired by the work of Harrington and Thomas regarding the syllabus.
Consider what AI does to learning in your course
Our syllabus policies may feel like a rote list of “rules,” but all policies are, at their core, intended to make the learning experience better for everyone. One of the concerns about generative AI is that we don’t have years of practical knowledge and resulting expertise with it, so we do not yet know its true impact on learning, which makes it even more difficult to write policies and guidelines for our specific courses.
The good and bad news is that the answer may differ in each course. For all aspects of our courses, we need to consider: what exactly counts as AI use? When does it help learning, and when does it hinder learning? If a student uses AI to help brainstorm a paper topic, or to check references, is that helpful or harmful? Do we count Google searches or Grammarly as AI? Is it more important to refrain from AI in the brainstorming stages, or in the refinement stages of a project? What if students use AI to summarize class notes, or to quiz themselves on content? We may need to reconsider nearly every aspect of our courses and determine, as best we can, the impact on learning.
One helpful tool that I’ll be using this year is the AI Assessment Scale (AIAS), developed by Mike Perkins, Leon Furze, Jasper Roe, and Jason MacVaugh. For each of my course assignments, along with the goals, details, and rubrics, I am now also including an AI use “level”.

For example, I am fully comfortable saying that the weekly personal reflections in my course need to be completed at level 1: “NO AI. You must not use AI at any point during the assessment.” These are personal reflections marked complete/incomplete. I do not grade them for grammar nor do I expect students to look up concepts or summarize ideas. Rather, I want these to be raw responses to activities, connections to their personal lives, or questions they still have. So I will need to mark them as such. This semester, I may dedicate occasional in-class time to work on these, to reinforce that they are personal reflections tied to the week’s content and activities.
On the other hand, for some of my larger assignments, I am reluctantly saying I will accept a level of AI use. For the two course papers due, I have merged levels 2 and 3 of the AIAS and adjusted and elaborated the language:
AI use: 2. AI Planning – 3. AI Refinement
2. AI Planning: AI may be used for pre-task activities such as brainstorming, outlining, and initial research. You will be expected to work on these ideas in class, so make sure you are able to talk about them fluently and that you have double-checked for any errors that can occur when using AI.
3. AI Refinement: AI may also be used to help complete or polish the task, including getting feedback and refinement. Make sure you feel confident that what you turn in is a strong representation of your thinking and your own work so that I can provide feedback to help your learning. These assignments exist solely for your learning benefit.
Your final submission will need to document how you used AI, whether to develop and refine ideas or to refine and evaluate your work, as well as an honest analysis of the impact on your learning process.
Because we will rely heavily on internal and external rubrics (the AAC&U VALUE Rubrics and our Social and Behavioral Science Requirements) for planning and for peer reviews of these assignments, I expect to see drafts evolve over time, and we will talk about them during in-class workshopping time for both brainstorming and peer review. I truly believe it would be hard for a student to let AI do all of the thinking within this process. I expect students to engage deeply with their work, and I hope to use the process to help them be proud of it and its evolution. As I thought through the steps of completing these written assignments, I realized that I would accept some use of AI along the way, but I also want to take advantage of any use to continually analyze the learning process. I am aware that this level of disclosure may not work for all courses.
Guidance for AI Syllabus Policies
Many institutions are requiring faculty to create AI policies. In a prior blog post, we talked about how to make the most of policy statements in your syllabus. Many of the same lessons remain relevant to AI syllabus policies, with a new lens and perhaps a new urgency.
Highlight the policy
Simply adding yet another policy to your syllabus, especially if it is within a long list of policies, may not make much of an impact. If this is something you truly want students to be aware of, and internalize, you will need to do more than list it. Talk about it in class. Ask students to read it and pose questions. Before the first big assignment, reserve some time in class or online to go over the policy and reduce any confusion. And, consider revising the policy if students offer alternatives that still meet the learning goals.
Explain the meaning in your course
There is no doubt that students are currently receiving contradictory policies about their use of AI. In some courses it is required, in some it is banned, and others ask students to note exactly how they use AI. Students may be receiving messages from their friends or family that run the gamut as well. We can hardly blame students for the lack of clarity on when and how AI use is permissible. Framing your intent within the context of your course can move this from rote language to meaningful guidelines.
Use specific examples and, if you have time, walk through an assignment or activity using AI in class; talk about what is gained or lost in the process and what is acceptable. Adding proactive explanations can take away the stigma of a student having to ask, especially if we are otherwise sending the message that AI use is taboo or that the rules should be obvious.
Help students internalize the “why” behind the policy
It seems fruitless to attempt to catch inappropriate AI use, and many AI detectors have been deemed ineffective. Writing a policy that serves only to deter and ‘catch cheaters’ further sets up an adversarial position between you and the students, something we are usually trying to break down through the creation of a motivational syllabus. Instead, try to write the policy in a way that explains why certain types of use are either allowed or discouraged. Help students understand why your policy is in place and how it is meant to benefit them.
Just telling students what to do or not to do does not always help them follow the guidelines appropriately. Perhaps AI provides a great opportunity to double down on the idea of self-directed learning. In my course, it is a perfect fit to ask students self-reflective questions about their use of AI and how they feel it is impacting their learning. I will have some questions built into assignments that ask them to analyze and articulate the impact. At times, I will have them complete something with AI and without AI to self-assess the difference. I anticipate many discussions about this both before and after assignments, and I hope that my honest curiosity about it will lead to open sharing and collaborative learning.
Invite and guide rather than shame
As we know from our Designing a Motivational Syllabus course, the tone and language of the syllabus can impact students’ impressions of the entire course and their overall motivation. Faculty have been told to be harsh in the syllabus and then be flexible later. But, just as we often tell students that “no late work is accepted” and then we give exceptions when students have a good reason, we should make sure that our stated AI policy actually matches what we will do during the course and doesn’t just exist to strike fear by covering all potential misuse.
I’ve spoken with college students who are now questioning their writing constantly and are terrified of being accused of using AI. One recently said, only partially joking, “I don’t even want to use Google anymore because I’m afraid I could get kicked out of college!” Any policies we create should aim to help students with the learning process and help them see what is and is not acceptable, rather than make them more afraid. I hope we can write policies that invite students to better understand, along with us, AI and its impact on learning.
In my case, I am very comfortable saying… “This approach is based on my current understanding about how AI could impact your learning. We will talk about it throughout the course, so these policies may change over time as our understanding evolves. But please know that my policy is created and enforced to make the best learning experience possible for you.”
Disclose your own use of AI
Although we seem eager to ask students to disclose their use of AI, many of us faculty are reluctant to share when we ourselves have used AI to help brainstorm an assignment, create content, or even give feedback on a paper. I’ve used it myself for creating rubrics or assignment ideas, often when I am either stuck or in a pinch. But I am usually using it in areas where I already have decades of knowledge, so I am able to see what is incorrect and use it appropriately. This is how I currently rationalize my own use of AI and how I will explain it to students, in the hopes that they can see some of the benefits and limitations, and also hold me accountable. It’s common for individuals to devalue work that they presume was done with AI. We don’t want students to devalue our teaching work, but in this tricky time, while we are all still figuring this out, being transparent can help us normalize AI use when it is helpful and also reveal its limitations.
Using Policies to Motivate Students
As you create and revise your AI policy, don’t forget that the syllabus is a powerful tool for setting the tone of your course. It is not only the first introduction your students may have to you and your class, but a resource that can reduce questions and clarify expectations for students as well as provide a snapshot of academic expectations. With a little thought, our AI policies can be used to invite, inspire and motivate students to engage in learning.
Helpful Resources
Below are some resources that I found helpful in crafting my own policies.
The Artificial Intelligence Disclosure (AID) Framework, by Kari D. Weaver
The Best AI Syllabus Policies I’ve Seen So Far, by Daniel Stanford
Creating your course policy on AI, Stanford Teaching Commons
Rework Your Syllabus
Interested in revising your syllabus into a motivational learning tool? Designing a Motivational Syllabus walks participants through a step-by-step process to transform their syllabus from a list of rules into a resource for learning.