In February, Ella Stapleton, then a senior at Northeastern University, was reviewing lecture notes from her organizational behavior class when she noticed something odd. Was that a query to ChatGPT from her professor? Halfway through the document, which her business professor had made for a lesson on models of leadership, was an instruction to ChatGPT to “expand on all areas. Be more detailed and specific.” It was followed by a list of positive and negative leadership traits, each with a prosaic definition and a bullet-pointed example.

Stapleton texted a friend in the class.

“Did you see the notes he put on Canvas?” she wrote, referring to the university’s software platform for hosting course materials. 

“He made it with ChatGPT.”

“OMG Stop,” the classmate responded.

“What the hell?”

Stapleton decided to do some digging. She reviewed her professor’s slide presentations and discovered other telltale signs of artificial intelligence: distorted text, photos of office workers with extraneous body parts and egregious misspellings.

She was not happy. Given the school’s cost and reputation, she expected a top-tier education. This course was required for her business minor; its syllabus forbade “academically dishonest activities,” including the unauthorized use of AI or chatbots.

“He’s telling us not to use it, and then he’s using it himself,” she said.

Stapleton filed a formal complaint with Northeastern’s business school, citing the undisclosed use of AI as well as other issues she had with his teaching style, and requested reimbursement of tuition for that class. The course accounted for a quarter of her total bill for the semester: more than $8,000.

When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it while others deployed AI detection services, despite concerns about their accuracy.

But, oh, how the tables have turned. Now students are complaining on sites like Rate My Professors about their instructors’ overreliance on AI and scrutinizing course materials for words ChatGPT tends to overuse, such as “crucial” and “delve.”

In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.

For their part, professors said they used AI chatbots as a tool to provide a better education. Instructors interviewed by The New York Times said chatbots saved time, helped them with overwhelming workloads and served as automated teaching assistants.

Their numbers are growing. In a national survey of more than 1,800 higher-education instructors last year, 18% described themselves as frequent users of generative AI tools; in a repeat survey this year, that percentage nearly doubled, according to Tyton Partners, the consulting group that conducted the research. The AI industry wants to help, and to profit: The startups OpenAI and Anthropic recently created enterprise versions of their chatbots designed for universities.

(The Times has sued OpenAI for copyright infringement for use of news content without permission.)

Generative AI is clearly here to stay, but universities are struggling to keep up with the changing norms. Now professors are the ones on the learning curve and, like Stapleton’s teacher, muddling their way through the technology’s pitfalls and their students’ disdain.

Making the grade

Last fall, Marie, 22, wrote a three-page essay for an online anthropology course at Southern New Hampshire University. She looked for her grade on the school’s online platform and was happy to have received an A. But in a section for comments, her professor had accidentally posted a back-and-forth with ChatGPT. It included the grading rubric the professor had asked the chatbot to use and a request for some “really nice feedback” to give Marie.

“From my perspective, the professor didn’t even read anything that I wrote,” said Marie, who asked to use her middle name and requested that her professor’s identity not be disclosed. She could understand the temptation to use AI. Working at the school was a “third job” for many of her instructors, who might have hundreds of students, said Marie, and she did not want to embarrass her teacher. 

Still, Marie felt wronged and confronted her professor during a Zoom meeting. The professor told Marie that she did read her students’ essays but used ChatGPT as a guide, which the school permitted.

Robert MacAuslan, vice president of AI at Southern New Hampshire, said that the school believed “in the power of AI to transform education” and that there were guidelines for both faculty and students to “ensure that this technology enhances, rather than replaces, human creativity and oversight.” A list of do’s and don’ts for faculty forbids using tools, such as ChatGPT and Grammarly, “in place of authentic, human-centric feedback.”

“These tools should never be used to ‘do the work’ for them,” MacAuslan said. “Rather, they can be looked at as enhancements to their already established processes.”

After a second professor appeared to use ChatGPT to give her feedback, Marie transferred to another university.

Paul Shovlin, an English professor at Ohio University in Athens, Ohio, said he could understand her frustration. “Not a big fan of that,” Shovlin said, after being told of Marie’s experience. Shovlin is also an AI faculty fellow, whose role includes developing the right ways to incorporate AI into teaching and learning.

“The value that we add as instructors is the feedback that we’re able to give students,” he said. “It’s the human connections that we forge with students as human beings who are reading their words and who are being impacted by them.”

Shovlin is a proponent of incorporating AI into teaching, but not simply to make an instructor’s life easier.

Students need to learn to use the technology responsibly and “develop an ethical compass with AI,” he said, because they will almost certainly use it in the workplace. Failure to do so properly could have consequences.

“If you screw up, you’re going to be fired,” Shovlin said.

One example he uses in his own classes: In 2023, officials at Vanderbilt University’s education school responded to a mass shooting at another university by sending an email to students calling for community cohesion.

The message, which described promoting a “culture of care” by “building strong relationships with one another,” included a sentence at the end that revealed that ChatGPT had been used to write it. After students criticized the outsourcing of empathy to a machine, the officials involved temporarily stepped down.

Not all situations are so clear cut. Shovlin said it was tricky to come up with rules because reasonable AI use may vary depending on the subject. The Center for Teaching, Learning and Assessment, where he is a fellow, instead has “principles” for AI integration, one of which eschews a “one-size-fits-all approach.”

The Times contacted dozens of professors whose students had mentioned their AI use in online reviews. The professors said they had used ChatGPT to create computer science programming assignments and quizzes on required reading, even as students complained that the results didn’t always make sense. They used it to organize their feedback to students, or to make it kinder. As experts in their fields, they said, they can recognize when it hallucinates, or gets facts wrong.

There was no consensus among them as to what was acceptable. Some acknowledged using ChatGPT to help grade students’ work; others decried the practice. Some emphasized the importance of transparency with students when deploying generative AI, while others said they didn’t disclose its use because of students’ skepticism about the technology.

Most, however, felt that Stapleton’s experience at Northeastern — in which her professor appeared to use AI to generate class notes and slides — was perfectly fine. That was Shovlin’s view, as long as the professor edited what ChatGPT spat out to reflect his expertise. Shovlin compared it with a long-standing practice in academia of using content, such as lesson plans and case studies, from third-party publishers.

To say a professor is “some kind of monster” for using AI to generate slides “is, to me, ridiculous,” he said.

A teachable moment

After filing her complaint at Northeastern, Stapleton had a series of meetings with officials in the business school. In May, the day after her graduation ceremony, the officials told her that she was not getting her tuition money back.

Rick Arrowood, her professor, was contrite about the episode. Arrowood, who is an adjunct professor and has been teaching for nearly two decades, said he had uploaded his class files and documents to ChatGPT, the AI search engine Perplexity and an AI presentation generator called Gamma to “give them a fresh look.” At a glance, he said, the notes and presentations they had generated looked great.

“In hindsight, I wish I would have looked at it more closely,” he said.

He put the materials online for students to review, but emphasized that he did not use them in the classroom, because he prefers classes to be discussion-oriented. He realized the materials were flawed only when school officials questioned him about them.

The embarrassing situation made him realize, he said, that professors should approach AI with more caution and disclose to students when and how it is used. Northeastern issued a formal AI policy only recently; it requires attribution when AI systems are used and review of the output for “accuracy and appropriateness.”

A Northeastern spokesperson said the school “embraces the use of artificial intelligence to enhance all aspects of its teaching, research and operations.”

“I’m all about teaching,” Arrowood said. “If my experience can be something people can learn from, then, OK, that’s my happy spot.”