A recent study indicates that AI chatbots may outperform the average human in creative thinking tasks, such as suggesting alternative uses for everyday objects. Nevertheless, the best-performing humans still surpassed the best chatbot results.
A recent study published in the journal Scientific Reports suggests that large language model (LLM) AI chatbots may outperform the average human in creative tasks, such as brainstorming alternative uses for common items, a hallmark of divergent thinking. However, the highest-scoring humans still surpassed the best chatbot responses.
Divergent thinking is a thought process often linked with creativity that emphasizes generating many different ideas or solutions to a given task.
It is commonly assessed with the Alternate Uses Task (AUT), in which participants are asked to come up with as many alternative uses for an everyday object as possible within a short time period. Responses are scored on four criteria: fluency, flexibility, originality, and elaboration.
Mika Koivisto and Simone Grassini compared 256 human participants’ responses with those of three AI chatbots (ChatGPT3, ChatGPT4, and Copy.Ai) on AUTs for four objects: a rope, a box, a pencil, and a candle. The authors assessed the originality of the responses by rating their semantic distance (how far removed a proposed use was from the object’s conventional use).
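Semantic distance is typically computed from word embeddings: responses whose vectors point away from the object's vector count as more original. The sketch below illustrates the idea with tiny hand-made vectors; the embedding values, vocabulary, and function names are illustrative assumptions, not the model the study actually used.

```python
import math

# Toy embedding vectors (hypothetical values for illustration only;
# real scoring uses vectors from a trained semantic model).
EMBEDDINGS = {
    "rope":     [0.9, 0.1, 0.0],
    "tie":      [0.8, 0.2, 0.1],   # conventional use: close to "rope"
    "necklace": [0.2, 0.7, 0.4],   # creative use: further from "rope"
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_distance(obj, response):
    # Higher distance = less related to the object = scored as more original.
    return 1.0 - cosine_similarity(EMBEDDINGS[obj], EMBEDDINGS[response])

print(semantic_distance("rope", "tie"))       # small: conventional use
print(semantic_distance("rope", "necklace"))  # larger: more original use
```

Under this scoring, "use a rope as a necklace" earns a higher originality rating than "use a rope to tie something," because its embedding sits further from the object's.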