The Department of Political Studies puts its own policy in place to address the increasing use of AI.
An e-mail sent to all students in the Department of Political Studies on Jan. 14 outlined the department's new policy on the use of Artificial Intelligence (AI). Rather than differing from the University-wide policy on AI and academic integrity used by many other departments, the new policy elaborates on it. The policy aims to inform students about what AI is and, from the department's perspective, the consequences for a student caught using AI tools, such as ChatGPT, or engaging in other forms of academic misconduct.
The policy outlines seven key points. The first three clarify that it builds on the University-wide policy, specifying that students may not use AI or delegate their tasks unless explicitly permitted. The fourth point warns that students suspected of cheating will face a formal investigation by the course instructor or supervisor, followed by a report filed with the Faculty of Arts and Science if the investigation finds a departure from academic integrity.
The fifth point notes that Large Language Models (LLMs) like ChatGPT can be unreliable for many tasks and should not be used for research. The final point urges students to read the policy in full, emphasizing that ignorance will not be accepted as a defense for using AI tools.
Stephen Larin, an assistant professor in the Department of Political Studies who has taught and researched AI in a political context, wrote the draft of the policy. Larin said in a statement to The Journal that he began the draft after faculty in the department discussed their experiences with AI-related academic integrity violations at the department's annual planning meeting last year.
According to Larin, the policy was deliberated over several meetings and adopted at the department's January meeting.
“The primary purpose of the policy is to improve student ‘literacy’ regarding academic integrity, AI, and how they intersect, as well as how using [LLMs] undermines the purpose of many aspects of political science education,” Larin said. “I have put significant effort into improving these types of literacy in all of my courses this year, and it has helped to decrease academic integrity violations.”
The policy highlights issues with AI, including its lack of capacity for understanding the truth or falsity of text, which makes it possible for it to provide false information. Larin also mentioned the significant environmental impacts of AI. He hopes that highlighting these drawbacks will deter students from using AI and decrease the level of academic integrity violations.
“There has been a dramatic increase in academic integrity violations over the past two years, not just in our own Department or University, but at all higher education institutions around the world,” Larin explained.
In his own experience, Larin said that in the last year he caught over 22 per cent of the students in the second-year courses he instructs using AI on one or more assignments, causing him to work extra hours to address the departures from academic integrity.
“The majority of the students in the class didn’t cheat, but the scale of the problem and the disrespect and foolishness that it demonstrated genuinely disgusted me,” Larin said.
When asked how the policy will remain relevant as AI evolves, Larin emphasized it's never appropriate for students to delegate work they're supposed to do on their own, highlighting "unauthorized delegation," rather than any specific technology or product, as the core concept of the policy.
Because the policy prohibits unauthorized delegation, if an instructor permits the delegation of some tasks, to AI or to group work, for example, that delegation is not a violation of academic integrity. Using AI to check spelling is very different from using it to rewrite assignments, Larin said.
Owen Massey, ArtSci ’27, expressed hope that the new policy will offer clearer guidance to students uncertain about the University-wide AI policy and outline the potential consequences for those who use AI.
“Hopefully, this [policy] will dissuade students from utilizing AI language models in their work and encourage them to seek more reliable and proper guidance in their academic careers,” Massey said in a statement to The Journal.