AI models outperform average human creativity in new study

DATE POSTED: February 10, 2026

A new study conducted by the Université de Montréal has revealed that generative artificial intelligence (AI) systems can surpass average human creativity in specific tests. The research, published in Scientific Reports, compared the responses of more than 100,000 humans with those of advanced AI models.

Models like GPT-4 demonstrated strong performance in tasks designed to measure original thinking and idea generation. However, the study also found that the most creative humans, particularly the top 10%, still outperformed AI, especially in areas such as poetry and storytelling.

Professor Karim Jerbi from the Université de Montréal’s Department of Psychology, with contributions from AI researcher Yoshua Bengio, led the study. It represents the largest direct comparison between human and large language model creativity to date.

Several leading large language models, including ChatGPT, Claude, and Gemini, were evaluated. Their output was compared with the responses of more than 100,000 human participants. Some AI systems, including GPT-4, exceeded average human scores on tasks designed to measure divergent linguistic creativity.

“Our study shows that some AI systems based on large language models can now outperform average human creativity on well-defined tasks,” stated Professor Karim Jerbi. He added that “even the best AI systems still fall short of the levels reached by the most creative humans.”

Analysis by co-first authors Antoine Bellemare-Pépin (Université de Montréal) and François Lespinasse (Université Concordia) indicated that while some AI models outperform the average person, peak creativity remains human. When the researchers looked at the most creative half of participants, their average scores exceeded those of every tested AI model. The gap widened significantly for the top 10% of creative individuals.

To evaluate creativity, researchers used the Divergent Association Task (DAT), a psychological test developed by study co-author Jay Olson. The DAT asks participants to list ten words that are as unrelated in meaning as possible. One example of a highly creative response was “galaxy, fork, freedom, algae, harmonica, quantum, nostalgia, velvet, hurricane, photosynthesis.” Performance on this task is linked to performance on other creativity tests involving writing and idea generation.
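The article does not spell out how DAT responses are scored, but in the published version of the task, creativity is quantified as the average semantic distance between the submitted words. The sketch below is a minimal illustration of that idea, assuming a caller-supplied word-to-vector lookup (a hypothetical `embeddings` dictionary, not the study's exact pipeline).

```python
# Sketch of a DAT-style score: average pairwise cosine distance between
# the ten submitted words, scaled to roughly 0-100. The `embeddings`
# argument is a hypothetical word -> vector lookup supplied by the caller.
from itertools import combinations
import numpy as np

def dat_score(words: list[str], embeddings: dict[str, np.ndarray]) -> float:
    """Higher scores mean the words are more semantically unrelated."""
    vectors = [embeddings[w.lower()] for w in words]
    distances = []
    for a, b in combinations(vectors, 2):
        cosine_similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        distances.append(1.0 - cosine_similarity)   # cosine distance
    return 100.0 * float(np.mean(distances))

# Example usage with the response quoted above, assuming `glove` maps words to vectors:
# score = dat_score(["galaxy", "fork", "freedom", "algae", "harmonica", "quantum",
#                    "nostalgia", "velvet", "hurricane", "photosynthesis"], glove)
```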

The researchers also explored whether AI’s success on the DAT extended to more complex tasks, such as composing haikus, writing movie plot summaries, and producing short stories. The results mirrored the DAT: AI systems sometimes exceeded average human performance, but skilled human creators consistently produced more original work.

The study found that AI creativity can be adjusted by altering technical settings, specifically the model’s “temperature.” Lower temperature settings produce more conventional outputs, while higher temperatures lead to more varied and exploratory responses. Creativity was also influenced by instructional prompts; for instance, prompts encouraging etymological thinking resulted in higher creativity scores.
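To make the temperature setting concrete, here is a minimal sketch of varying it when prompting a model. The OpenAI chat API is used only as one familiar example; the study's actual models, prompts, and tooling are not reproduced here, and the prompt wording is a hypothetical stand-in.

```python
# Sketch: sample the same prompt at several temperatures.
# Low temperature tends to yield conventional responses; higher values
# produce more varied, exploratory output, as described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "List ten words that are as unrelated in meaning as possible."

for temperature in (0.2, 0.7, 1.2):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=temperature,
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```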

“Even though AI can now reach human-level creativity on certain tests, we need to move beyond this misleading sense of competition,” said Professor Karim Jerbi. He suggested that generative AI is a powerful tool to enhance human creativity rather than replace it. The findings imply a future where AI assists human imagination.
