The Eagle
Delivering American University's news and views since 1925
Saturday, March 9, 2024

Part II: AI: Artificial intelligence or academic integrity?

Professors grapple with new AI tools as they try to preserve academic integrity

Editor’s note: This story is the second part of a two-part series. Read part one here.

It’s hard to have a conversation about academic integrity without bringing up artificial intelligence tools — and for good reason. With platforms like OpenAI’s ChatGPT and Google’s Bard readily available, students and professors alike wonder how, if at all, they should be using them. 

For American University professors, AI arrived in the classroom in the middle of the University’s rewrite of its academic integrity code. Until that process concludes, professors have been left to determine for themselves what appropriate use of AI tools looks like in their classrooms. Some have welcomed the technology, deeming it necessary to keep up with a fast-paced world, while others have cautioned against it, worrying that it might limit creativity and originality. 

At the beginning of the 2023-24 school year, the University’s newly formed Office of Academic Integrity offered workshops to professors on how to approach AI in their courses. Since then, professors have confronted AI in classrooms with their own policies. 

Some professors, like Travis Carlisle, an adjunct instructor of intelligence analysis in the Department of Justice, Law and Criminology, encourage careful use of AI. Carlisle said he believes it’s important for students — especially those looking for careers in intelligence work — to learn how to use AI tools to enhance, rather than replace, their skills. 

“Students are encouraged to explore and use Generative AI tools in their research and these capabilities will certainly become tools not only in the workplace, but also is of great interest to the Intelligence Community,” according to the syllabus for Carlisle’s Intelligence Analysis course. 

To encourage this learning, he offers extra credit for students who use and cite AI in their projects, adding that he believes this discourages students from using AI irresponsibly. To receive the extra credit, students must include proper MLA citations for AI responses, an annex with transcripts of how they engaged with the AI platform and a citation from a reputable source confirming the accuracy of the information provided by the AI. 

“Do not cite Generative AI by itself as a source of factual information (e.g., asking it to define a term and then quoting its definition in your paper without any other sourcing),” reads the course syllabus. “Generative AI is not always trustworthy and is not considered a credible source for use in academic writing by itself.”

Beyond pushing students to explore AI as a research tool, Carlisle also uses ChatGPT to grade assignments. He added that he tells students exactly how he uses the program and how he refines its feedback before giving the graded assignment back. 

Other professors have incorporated AI into assignments by challenging students to compare their work to the writing generated by the program. Daniella Olivares, a senior in the College of Arts and Sciences, said one of her assignments for a health policy class involved asking ChatGPT to summarize the healthcare system in a country and then reflecting on how the summary compared to her own research. 

“In my reflection of the AI versus my [research] … I wrote that structure-wise it provided an output based on bullets and it was very simplified, and it didn’t really do a deep dive into stuff,” she said. “It also had a robotic tone of voice.”

Olivares said she thought her professor may have designed the assignment to deter students from relying on ChatGPT in their own research. Even though she sees the value in comparing her work and using AI output as a starting point for a project, she said using ChatGPT on an assignment — even an assignment that required it — felt foreign. 

“It went against what I’ve been taught to do. Like you cannot be using an AI-generated thing for an assignment, so it felt very strange,” she said. “It felt weird, but I guess it was out of my comfort zone. Something very different that I didn’t ever expect to do for an assignment.”

Other students said they have started using ChatGPT in their research processes to brainstorm or find sources. Sometimes, though, the AI software generated fake sources. Shane Hickey, the University’s interlibrary service coordinator, said these fake citations occasionally led students to place interlibrary loan requests for sources they could not find at Bender Library, sources that didn’t actually exist. This, Hickey said, was a problem the Library had never seen before. 

“What we started to notice, particularly last semester, spring 2023, was we were getting citations primarily for articles, and we couldn’t find records of those articles existing anywhere on the internet or in the author’s CV,” he said. “And that’s something that is not common – that’s unheard of for my kind of work.” 

Hickey said these requests tapered off over the summer when, after experimenting with ChatGPT, he found the program had begun responding to requests for citations with instructions on how to find sources instead of generating a list of sources. 

This phenomenon was just one of several signs of AI’s entrance into classrooms. Hickey said the Library now has a working group that aims to better understand the landscape of AI and create a report with next steps for the Library. 

“How do we make sure that everybody at AU has equitable access to the variety of tools that are out there? That’s something they’re looking at,” he said. “Privacy is another topic. You know, a lot of times anything you enter into ChatGPT, for example, is not yours, it’s OpenAI’s, so educating folks on privacy.”

Meanwhile, some professors have sought to avoid complications from AI altogether by explicitly banning it. 

Glenn Moomau, a professor in the Department of Literature who teaches freshman writing courses and is heavily involved in drafting the new academic integrity code, has banned AI in his classes, saying he is concerned about how it could take away from the legitimacy of students’ work. 

“This moment in history is a great time to talk about authenticity,” he said. “What makes you an authentic person? If you depend on AI to do all of your work for you, then who are you and what are your skills and your capabilities?”

He added that he worries that students won’t develop the ability to think through essay prompts if they rely on AI in their research process. 

“It’s even a greater skill to create your own prompt, create your own idea and that’s something that you know, that takes time and process,” Moomau said. “I feel like that’s an important point to discover how your own mind works. … If you don’t cultivate that type of mindset, you’re liable to not get a good education.” 

Janine Beekman, a professor in the Department of Psychology, echoed these concerns, saying that she worries relying on AI limits creativity. 

“If we’re just using existing knowledge without really comprehending it ourselves, then how do we really innovate and come up with new ideas?” she said.

Beekman said she makes it clear to students on the first day of class that she considers using AI to write a paper a form of plagiarism, which she “takes seriously and personally.”

Both professors agreed that AI might be useful in other classes — particularly in computer science and coding classes — or at a later point in time, but for now, they’ve forbidden it in their classrooms. 

“I have a hard time finding a proxy for that in a class like psychology where you do kind of have to synthesize the information in your own words and come up with your own ideas,” Beekman said. 

For now, though, AI use will remain at the discretion of professors. The University’s new Academic Integrity Code likely won’t define specifically how these tools should be used. It will include broad guidance about what responsible help looks like, but, ultimately, it will be up to individual professors to set their own policies on how AI and other emerging tools can be used in their classrooms. 

This article was edited by Tyler Davis, Jordan Young and Abigail Pritchard. Copy editing done by Isabelle Kravis and Luna Jinks.

administration@theeagleonline.com

