AI- or human-written language? Assumptions mislead
Human assumptions regarding language usage can lead to flawed judgments of whether language was AI- or human-generated, Cornell Tech and Stanford researchers have found in a series of experiments.
While individuals’ ability to detect AI-generated language was generally no better than chance, people were consistently swayed by the same verbal cues, leading them to the same flawed judgments.
Participants could not reliably differentiate AI-generated from human-generated language: they erroneously assumed that mentions of personal experiences and the use of “I” pronouns indicated human authors, while attributing convoluted phrasing to AI.
“We learned something about humans and what they believe to be either human or AI language,” said Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech and of information science at the Cornell Ann S. Bowers College of Computing and Information Science. “But we also show that AI can take advantage of that, learn from it and then produce texts that can more easily mislead people.”
Maurice Jakesch, Ph.D., a former member of Naaman’s Social Technologies Lab at Cornell Tech, is lead author of “Human Heuristics for AI-Generated Language Are Flawed,” published March 7 in Proceedings of the National Academy of Sciences. Naaman and Jeff Hancock, professor of communication at Stanford University, are co-authors.