AI invades Princeton, where 30% of students cheat—but peers won't snitch
Mirrored from Ars Technica — AI for archival readability. Support the source by reading on the original site.
Pity poor Princeton.
The ultra-elite university has a mere $38 billion in endowment money. Many of its dorms lack air conditioning. And it’s in New Jersey.
I kid about New Jersey, of course. Despite not being allowed to pump one’s own gas there, the “Garden State” grew on me during three years spent in the Princeton area. I still keep up with its goings-on, which led me to this week’s article in the Daily Princetonian on how AI was disrupting the university’s long-running traditions.
Though a beautiful place, Princeton is also extremely competitive; before one heads up to New York to become a captain of finance, one needs to succeed in the classroom. And when everyone else in the classroom is a genius, cheating becomes a real option to stay ahead, especially in the sciences.
In a 2025 survey of Princeton seniors, 29.9 percent of students admitted to cheating on at least one assignment or exam. (This skews differently by degree: 40.8 percent of students seeking a bachelor of science in engineering [BSE] degree admitted to cheating, compared to “only” 26.4 percent of bachelor of arts [BA] students.)
And according to the data, most of this cheating is done with generative AI.
Cheating is easier at Princeton than at some places because the school does not allow its professors to proctor exams. Thanks to an honor code pact going back to 1893, Princeton profs do not watch their students take in-class tests. Students, for their part, must write, “I pledge my honor that I have not violated the Honor Code during this examination” at the start of each test. Students are also honor-bound to report other students whom they see cheating.
But thanks to cell phones, AI, and a culture not willing to “snitch” on others, the old system is under significant strain. Cheating is now widespread and is not being reported—even though it bothers many students. As a January opinion piece about the school’s honor code put it:
According to students I’ve spoken with, cheating on in-person exams comes as no surprise in some engineering and economics classes. One student told me that in one Economics exam, there was a line out the door to use the men’s bathroom—suggesting that cheating was ubiquitous.
But because students don’t want to report this behavior, many “turn a blind eye to cheating, or deliberately avoid sitting near the back row of a lecture hall to avoid catching their peers in the act,” the piece added. The 2025 senior survey found that 44.6 percent of all seniors had witnessed cheating—and chosen not to report it.
These dynamics have led both students and faculty to ask Princeton to bring professors back as exam proctors. According to a document circulated by the school, there is now a “perception that cheating on in-class exams has become widespread.”
Why? According to administrators:
Commonly cited [reasons] are the advent of generative artificial intelligence products which significantly lower the barrier to gaining unfair advantage in the context of an in-class examination. The ease of access of these tools on a small personal device have also changed the external appearance of misconduct during an examination, which is much harder for other students to observe (and hence to report). Many reports that do arrive to the Honor Committee are now anonymous because of another technological development of longer standing—social media—which has reportedly deterred students from reporting openly out of apprehension of doxxing or shaming among their peer groups. This has made it difficult for the Honor Committee and the Office of the Dean of Undergraduate Students (ODUS) to follow up on concerns, even when there is significant buzz or outrage about supposedly egregious violations.
This week, Princeton faculty voted to require instructor proctoring of all in-class exams beginning on July 1. Only a single faculty member objected.
Even after July 1, however, professors will not interfere directly with attempts to cheat. Instead, they will observe and take notes, serving as “an additional witness in the room” who can testify in cases later brought before the Honor Court.
AI has quickly upended education, pushing many teachers to back off on written assignments and take-home tests in favor of in-class or even oral exams. As Princeton’s example shows, though, not even this is enough; plenty of students, given the chance, will just as happily use AI to cheat while in a classroom surrounded by their peers if they can get away with it.
Such widespread outsourcing of thought and memory is deeply depressing to many educators. This includes our own Scott Johnson, who recently penned a piece for Ars about what it feels like to grade so many responses generated by machines rather than by humans. (Hint: It does not feel good.)
It’s not like students think they are actually learning when they do this; they’re too smart for that, especially at Princeton. But when the pressure to succeed remains high, and the cost/difficulty of AI tools remains low, many students are tempted to take a shortcut, even one that ultimately harms them. As Scott concluded:
I haven’t encountered any students who think they’re learning when they let LLMs do their work, despite the face that college administrators and LLM advertising try to put on this. It’s just workload management to them.
Who knows what will happen if the AI bubble pops and the frictionless and ubiquitous access to LLMs withers into something much more limited. But while AI is here, it certainly isn’t revolutionizing education and enhancing learning. It’s just making it extraordinarily difficult to do all the things that have been helping students learn for a very long time.
That’s not how the AI companies see things, of course. When I read this week’s Daily Princetonian article on the proctoring change, I couldn’t help but notice the giant banner across the top of the piece. “PRACTICED TO PREPARED,” it said. It was an ad for Google Gemini.