The findings make sense, given that people who are already creative don’t really need to use AI to be creative, says Tuhin Chakrabarty, a computer science researcher at Columbia University, who specializes in AI and creativity but wasn’t involved in the study.
There are some potential drawbacks to taking advantage of the model’s help, too. AI-generated stories tend to be similar to one another in semantics and content, Chakrabarty says, and AI-generated writing is full of telltale signs, such as very long, exposition-heavy sentences laden with stereotypes.
“These kinds of idiosyncrasies probably also reduce the overall creativity,” he says. “Good writing is all about showing, not telling. AI is always telling.”
Because stories generated by AI models can only draw from the data that those models have been trained on, those produced in the study were less distinctive than the ideas the human participants came up with entirely on their own. If the publishing industry were to embrace generative AI, the books we read could become more homogenous, because they would all be produced by models trained on the same corpus.
This is why it’s essential to study what AI models can and, crucially, can’t do well as we grapple with what the rapidly evolving technology means for society and the economy, says Oliver Hauser, a professor at the University of Exeter Business School, another coauthor of the study. “Just because technology can be transformative, it doesn’t mean it will be,” he says.