From Detection to Design: Academic Integrity in the AI Era

Most universities responded to generative AI by updating their honor codes and adding a line about ChatGPT to the syllabus. The updates sound thorough, but enforcement is scattered, and the underlying problem remains: the old definition of academic integrity assumed you could separate help from cheating by drawing a line between tools and outsourcing. That line no longer exists.
Academic integrity used to mean you did your own work. The new version has to mean something slightly different: you produced work that reflects your own thinking and learning, and you disclosed the process you used to get there. That shift sounds small, but it changes what instructors need to ask for and what students need to submit. Stanford's Academic Integrity Working Group frames this as preparing students for an AI-enabled future, not just catching rule violations (Stanford News, 2025). The question is not whether students will use AI. The question is whether they will use it in ways that still require them to learn.
The first response from many institutions was detection. Turnitin released AI detection features, and instructors started running submissions through multiple checkers to see whether the text was flagged. The problem is that detection tools are not reliable enough to base decisions on, especially when the consequences are serious. Stanford's working group found that AI detectors struggle with mixed writing, where students draft with AI and then revise, and that they produce false positives that can unfairly penalize students (Stanford News, 2025). Vanderbilt's guidance goes further, explicitly warning instructors that traditional plagiarism checkers are not reliable for detecting generative AI and that many third-party detection tools may violate student privacy protections under FERPA (Vanderbilt University, 2023). Detection-first approaches also miss the point: integrity is not about policing but about designing assessments where students have to demonstrate learning regardless of what tools they use.
The policies that work in practice are not the ones that try to ban AI completely. They are the ones that make expectations explicit, shift assessment design, and require disclosure. The University of Sydney moved to a two-lane approach in 2025, where secure supervised assessments like exams prohibit AI unless explicitly allowed, and open unsupervised assessments permit AI with proper acknowledgment (University of Sydney, 2024). That model does two things. It protects high-stakes evaluation from misuse, and it teaches students how to use AI responsibly in contexts where they will encounter it after graduation. Stanford updated its honor code in 2024 to require students to disclose any assistance from generative tools, treating undisclosed AI use as academic dishonesty equivalent to cheating (Hastewire, 2025). The emphasis is on transparency, not prohibition.
Disclosure works because it puts the responsibility on the student to explain their process, and it gives instructors a clear standard to enforce. Princeton requires students to confirm that the instructor permits AI and to disclose how and why AI was used, rather than citing it as a source, since AI output is not authored by a person (Princeton University Libraries, 2023). That distinction is important. Citing a source means you borrowed an idea from another author. Disclosing AI use means you used an algorithmic tool as part of your workflow, and the instructor needs to know that to assess your work fairly. The University of Waterloo recommends that instructors require students to include a disclosure statement, offering sample wording such as "This text was produced by the author using assistance from [insert generative AI provider]" (University of Waterloo, 2025).
The second piece is assessment redesign. If a take-home essay can be completed entirely by an AI without the student learning anything, then the assessment is not testing what it is supposed to test. Stanford's working group recommends in-person formats like oral exams and in-class writing assignments for high-stakes evaluation, where AI use is functionally limited (Stanford News, 2025). The University of Queensland's guidance suggests that instructors who want to restrict AI should use supervised in-person assessment, and that for open assessments where AI is allowed, they should require students to reference it (University of Queensland, 2025). Cornell's Center for Teaching Innovation provides sample course policies that prohibit AI, permit it with attribution, or encourage its use, depending on the learning goals (Thesify, 2025).
The third piece, which institutions are still figuring out, is what to allow and when. A blanket ban on AI does not prepare students for workplaces where these tools are standard. A blanket permission without guidance leads to students outsourcing thinking they should be doing themselves. The middle ground is task-specific rules. Allow AI for brainstorming and research. Prohibit it for final answers on problem sets. Require it to be disclosed on essays. Forbid it on exams unless the exam is explicitly testing AI fluency. Stanford's policy treats generative AI like assistance from another person, meaning it is prohibited unless the instructor permits it, and when it is allowed, students must disclose it (Thesify, 2025). That default makes sense because it mirrors the existing rule for collaboration, which most students already understand.
Departments are starting to realize that course-level policies are not enough. Stanford's working group calls for departments to establish consistent and enforceable AI policies that can be applied fairly across sections, communicated clearly, and discussed with students throughout the year (Stanford News, 2025). When one professor in a department bans AI and another professor allows it without explanation, students get mixed signals about what integrity means in that discipline. Department-level alignment does not mean every course has the same rule. It means the rules are coherent, the reasoning is explained, and students know what to expect.
The policies that fail are the ones that do not tell students what to do. A vague statement like "use AI responsibly" does not give students enough information to comply. A rule that says "you may use AI for some tasks but not others" without specifying which tasks leads to violations that students did not know they were committing. Vanderbilt's guidance recommends that instructors answer three questions explicitly for every course: what constitutes academic misconduct with respect to AI, how students should disclose or cite AI use, and whether confidentiality and privacy policies apply (Vanderbilt University, 2023). If instructors do not provide a statement on AI, Vanderbilt's default is that students may use generative AI tools but must disclose all usage.
Academic integrity after AI is not about preventing students from using the tools. It is about making sure the tools do not replace the learning. The rules that seem to work are the ones that require disclosure, redesign assessments to measure actual understanding, and set expectations at the department level so students are not guessing what each instructor wants. Stanford's provost framed this clearly when she said departments are strategic campus partners in academic integrity and should lead conversations that reflect the unique contexts of their disciplines (Stanford News, 2025). Integrity is no longer just about individual honesty. It is about institutional design.


