3 Sources
[1]
Scientists discover AI can make humans more creative
Artificial intelligence (AI) is commonly viewed as a technology designed to automate work and potentially replace human labor. However, new research from Swansea University offers a different perspective. The findings suggest that AI can also function as a creative collaborator that encourages exploration, engagement, and inspiration. Researchers from the University's Computer Science Department carried out one of the largest studies so far examining how people work alongside AI during creative design tasks. More than 800 participants joined an online experiment where they used an AI-supported system to design virtual cars.

How AI Generated Diverse Design Ideas

Rather than quietly optimizing designs behind the scenes, the system used a method called MAP-Elites to produce visual galleries filled with many different design possibilities. These galleries showed a wide spectrum of car concepts, including highly effective designs, unusual ideas, and even some intentionally flawed options.

Turing Fellow Dr. Sean Walton, Associate Professor of Computer Science and the study's lead author, explained: "People often think of AI as something that speeds up tasks or improves efficiency, but our findings suggest something far more interesting. When people were shown AI-generated design suggestions, they spent more time on the task, produced better designs and felt more involved. It was not just about efficiency. It was about creativity and collaboration."

Why Traditional AI Evaluation May Be Too Limited

The study, published in the ACM journal Transactions on Interactive Intelligent Systems, also highlights a problem with how AI design tools are typically assessed. Standard metrics often focus on simple behaviors, such as how frequently users click on or copy AI suggestions. According to the researchers, these measures overlook important aspects of the experience, including how the technology influences people's thoughts, emotions, and willingness to explore new ideas.
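The MAP-Elites method named above keeps an archive of the best solution found in each niche of a behaviour space, which is what yields a diverse gallery rather than a single optimum. The following is a minimal illustrative sketch, not the Swansea system: the one-dimensional `fitness` and `niche` functions are stand-in assumptions for the study's actual car-design objectives.

```python
import random

# Minimal MAP-Elites sketch (illustrative only; not the Swansea system).
# The archive ("gallery") stores the best solution found for each niche
# of a behaviour space, so the result is a diverse set of elites.

def fitness(x):
    # Toy objective: prefer values near 0.5 (stand-in for "design quality").
    return 1.0 - abs(x - 0.5)

def niche(x, bins=10):
    # Behaviour descriptor: which of `bins` intervals of [0, 1] x falls in.
    return min(int(x * bins), bins - 1)

def map_elites(iterations=2000, bins=10, seed=0):
    rng = random.Random(seed)
    archive = {}  # niche index -> (fitness, solution)
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # Mutate a randomly chosen existing elite.
            _, parent = archive[rng.choice(list(archive))]
            x = min(max(parent + rng.gauss(0, 0.1), 0.0), 1.0)
        else:
            # Occasional random restarts keep filling new niches.
            x = rng.random()
        f, n = fitness(x), niche(x, bins)
        if n not in archive or f > archive[n][0]:
            archive[n] = (f, x)  # keep only the best elite per niche
    return archive

gallery = map_elites()
print(len(gallery))  # number of distinct niches filled (at most 10)
```

Because every niche retains its own champion, low-fitness but behaviourally unusual designs survive in the gallery alongside the strongest ones, which matches the article's point about showing users "intentionally flawed options."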
The Swansea researchers argue that AI systems should be evaluated using broader methods that capture these deeper effects. Understanding how AI shapes human thinking and engagement could provide a more complete picture of its impact.

Why Imperfect Ideas Can Boost Creativity

Dr. Walton emphasized that variety in AI-generated output played a crucial role in the experiment. "Our study highlights the importance of diversity in AI output. Participants responded most positively to galleries that included a wide variety of ideas, including bad ones! These helped them move beyond their initial assumptions and explore a broader design space. This structured diversity prevented early fixation and encouraged creative risk-taking.

"As AI becomes increasingly embedded in creative fields, from engineering and architecture to music and game design, understanding how humans and intelligent systems work together is essential. As the technology evolves, the question is not only what AI can do but how it can help us think, create and collaborate more effectively."
[2]
AI boosts brainstorming but may slow the creative process
Generative AI has emerged as a powerful catalyst for creative brainstorming, yet it can slow experienced designers once the work turns toward finishing a piece. The finding reframes AI not simply as a productivity tool but as a collaborator whose benefits and drawbacks depend on where a creator stands in the creative process.

Inside poster design tasks that moved from sketching ideas to producing final artwork, the tension between inspiration and execution became visible. Jinghui Hou, an assistant professor at the University of Houston (UH), linked the delay to expert habits. Less experienced designers could take AI output and move on, while veterans often stopped to revise, edit, and rebuild it. That extra cleanup matters because creative work rarely ends with a first spark, and the paper focused on what happens afterward.

The researchers divided creative work into two stages. First comes ideation, the stage of generating many possibilities before committing to one path. After that comes the harder task of implementation: choosing one option, building it out, and making it fit the brief. The team found that AI raised early-stage scores by 76% in novelty, 24% in relevance, and 97% in complexity. Those gains make sense because abundance helps when people are still searching, but abundance can become noise during finishing.

Years of training can harden into expertise fixation, which keeps experts reaching for familiar routines. When AI produced images with its own logic, professionals often had to translate that output back into their practiced methods. Screen recordings showed heavier revision work, with expert designers adding elements and editing existing ones more often before settling. In the student experiment, experts using AI in implementation spent 57% more time and still reached similar creativity scores. People without deep design training kept gaining help because AI handled parts of production they had not already mastered.
Instead of defending a personal routine, they could accept suggestions, borrow structure, and keep moving toward a workable result. Among lower-expertise students, implementation improved novelty, relevance, and complexity when AI arrived only during that later stage. That pattern suggests AI can lower barriers for beginners even while it frustrates people who already work from strong habits.

The evidence came from two experiments: 192 students completed a lab poster task, and 120 professionals tackled a real advertising brief. One test kept conditions tight enough to separate idea generation from execution, while the other moved into real professional work. The field study also used Midjourney V6.1, a text-to-image generative AI system that creates detailed images from written prompts, allowing the authors to test whether newer models changed the basic pattern. Professional designers still slowed during implementation, spending about 14.6 extra minutes when AI entered only at that point.

AI clearly expanded the number of ideas people tried, yet it did not trap them in endless indecision. Most participants still carried roughly one option into the final stage, even after exploring several machine-made possibilities first. Professionals also reported more mental stimulation during brainstorming, while feelings of overload barely moved at all. That balance helps explain why early experimentation opened the process instead of freezing it with too many options.

Hou argued that the next improvement should happen in the interface, not only in the raw image generator. "We would suggest that all people embrace AI in the brainstorming stage. In the implementation stage, we find that AI is still very helpful for those ordinary people, but it creates more work for expert designers," Hou said. That advice points toward systems that adapt to users, instead of forcing every user to adapt to the system.
The paper's logic extends beyond posters, because many creative jobs also move from open exploration to disciplined execution. Writing, advertising, and product work all ask people to generate options first, then narrow them into something usable. Whenever AI expands the search without disturbing a practiced routine, people are more likely to feel helped than interrupted. Once the tool begins shaping the final form, the question becomes less about talent and more about control.

These results came from graphic design tasks, so they do not settle how musicians, filmmakers, or architects will respond. Real projects can loop through many drafts, and the paper simplified that mess into two clear stages for comparison. Even so, the repeated finding across students and working professionals makes the central split hard to brush aside. Future studies will need to test whether better controls, different media, or team settings can ease the expert slowdown.

AI looked strongest when it expanded possibility and weakest when it collided with trained routines, which recasts creativity as a sequence. Tools may work best when beginners can lean on automation and experts can decide exactly when assistance enters.
[3]
AI is making the one-person creative studio a reality
A creative director stares at a blank page at 8:07 a.m., coffee cooling beside a half-finished brief. Ten years ago, that page would have pulled in a crowd: copy, art, strategy, maybe a junior team to feed the room. Today, the room can fit in a laptop, and the first sparks arrive in seconds.

A massive new experiment from the University of Montreal points to a clear turning point: generative AI now beats the average person on certain creativity tests, even with older models such as GPT-4 that are more than a year out of date. The implication for creative work feels immediate. GPT-4 already performs strongly on structured idea-generation tasks, and the study's scale makes that point hard to dismiss. Researchers compared leading systems to more than 100,000 people and found that some models exceeded average human scores on divergent linguistic creativity, using the Divergent Association Task.

In plain terms, a machine can now produce plenty of original-feeling options on demand, especially when the task rewards variety and semantic distance. That is exactly what many professionals ask for during early-stage ideation: names, angles, taglines, hooks, framing, counterpoints, and starting structures. An older model can flood the table with options, then your judgment selects the few that fit brand voice, audience reality, and business constraints. That workflow already compresses hours into minutes, and it shows up in everyday behavior.

My recent LinkedIn poll, which captures more recent models, illustrates that reality: 70 percent of respondents reported their primary use case for generative AI as research, analysis, and brainstorming. The key shift for leaders sits inside that word "primary." When brainstorming and related creative activities become the dominant use case, the tool is no longer a novelty. It becomes part of the operating system for creative work.
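The Divergent Association Task mentioned above scores creativity as the average pairwise semantic distance between a set of supposedly unrelated nouns. Here is a rough Python sketch of that scoring idea; the published task uses learned word embeddings, so the tiny hand-made vectors and word list below are purely illustrative assumptions.

```python
import math

# Sketch of Divergent Association Task (DAT)-style scoring: average
# pairwise semantic distance between nouns, scaled to roughly 0-100.
# Real DAT uses large learned word embeddings; these toy 3-d vectors
# are hand-made assumptions for demonstration only.

TOY_EMBEDDINGS = {
    "cat":     [0.9, 0.1, 0.0],
    "dog":     [0.8, 0.2, 0.1],   # close to "cat" -> small distance
    "volcano": [0.0, 0.9, 0.3],
    "algebra": [0.1, 0.0, 0.95],
}

def cosine_distance(u, v):
    # 1 - cosine similarity, so identical directions score 0.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def dat_score(words, embeddings):
    # Average pairwise cosine distance, scaled by 100 like the published task.
    vecs = [embeddings[w] for w in words]
    dists = [cosine_distance(vecs[i], vecs[j])
             for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return 100.0 * sum(dists) / len(dists)

related = dat_score(["cat", "dog"], TOY_EMBEDDINGS)
diverse = dat_score(["cat", "volcano", "algebra"], TOY_EMBEDDINGS)
print(related < diverse)  # semantically unrelated word sets score higher
```

The scoring explains why the task "rewards variety and semantic distance": a model that emits many mutually distant concepts scores high even without any judgment about which concepts are actually useful.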
Teams that once depended on a large volume of human draft labor start to depend on orchestration: prompt craft, iteration discipline, and a sharp creative brief. Indeed, the University of Montreal study showed that with better prompting and directions for the model, its creative output substantially improves. Creative leaders already tune humans by context and constraint. They will tune models the same way.

Assistants generate options after direction. Partners push back, reframe and expand the search space with you. Newer models move toward partnership because they sustain longer threads, track intent more reliably, and generate richer alternatives across formats. The study extends beyond word lists into creative writing tasks such as haiku, plot summaries and short stories, and it still finds AI matching or exceeding average human work in some cases. That matters for professional output because modern creative rarely lives in a single lane. A campaign needs narrative, product truth, performance variants, visual direction and platform adaptations.

Partnership also changes the emotional rhythm of creative work. The hardest part often involves momentum: the dead zone between the brief and the first compelling direction. A model that can generate 10 plausible campaign territories, then remix the best three into sharper versions, keeps the creator moving. You provide taste, ethics, positioning and audience empathy. The model provides relentless iteration. That pairing raises the "creative watts" per person.

The study also underscores a ceiling for older models where top human creativity stays ahead, especially on richer work like poetry and storytelling. In practice, that ceiling becomes a map of where human advantage concentrates. The premium shifts toward high-level concepting, tonal mastery, and the ability to connect a brand to culture with precision. Those skills resemble direction more than production.
As models improve, the human role grows more like a showrunner than a room full of scriptwriters. This is where staffing changes show up. A single creative lead equipped with multiple AI collaborators can cover territory that once required several specialists for first drafts. The work still calls for humans, yet the leverage per human rises. Fewer people can ship more, and that reality ripples through agencies and in-house studios.

The Mad Men image of a packed room has always been partly theater. The real engine has been a small number of people who frame the problem well, spot the surprising angle, and shape the final artifact. AI makes that truth operational. Instead of assembling a full room to generate breadth, one person can simulate breadth through multiple model "personas," each tuned to a role: contrarian strategist, emotional storyteller, ruthless editor, and audience advocate. The new creative team becomes a human lead plus an ensemble of AI brainstorming partners. The study's top-line pattern supports this future: average performance rises, yet peak human creativity stays distinctive, especially among the most imaginative participants.

For professional readers, that translates into a simple career equation. Routine ideation and first-pass drafting become abundant. Taste, originality and synthesis become scarce. Scarcity drives value.

Organizations will respond with new process design. A creative lead can run tighter loops: brief, generate, evaluate, refine, test, and ship. Fewer handoffs reduce drift. Brand consistency improves because the same director guides more output. Speed increases because iteration happens in minutes. Budget reallocates from headcount toward talent density, tooling and review. The practical challenge becomes governance: quality control, originality standards and responsible use. Partnership demands a stronger brief, clearer constraints and sharper review instincts.
It also demands a human who understands audience reality, business goals and brand stakes. AI can generate abundance; it cannot own accountability. The creative leader owns the call.

Creative work is entering an era of compression. Older models already handle much of the early ideation workload, and newer models accelerate toward true partnership. That combination boosts creative productivity so dramatically that a smaller number of creatives can cover more ground, with higher expectations for judgment and originality. The future looks less like a crowded bullpen and more like a single high-leverage creator running an AI-powered studio, shipping better ideas faster and setting a new standard for what "creative capacity" means.

Gleb Tsipursky, Ph.D., is CEO of the future-of-work consultancy Disaster Avoidance Experts. He is the author of "The Psychology of Generative AI Adoption" and "Returning to the Office and Leading Hybrid and Remote Teams."
Major studies from Swansea University and the University of Houston reveal that AI can function as a creative collaborator, with the Houston research finding that it boosted brainstorming novelty by 76%. However, the technology can slow experienced designers during implementation, adding 57% more time as experts revise AI-generated output to match their practiced routines.
AI is reshaping how humans approach the creative process, moving beyond its reputation as a mere automation tool to become an active collaborator in design and ideation. Research from Swansea University involving more than 800 participants demonstrates that AI can enhance human creativity when people work alongside intelligent systems during creative design tasks [1]. The study used an AI-supported system employing MAP-Elites methodology to generate visual galleries filled with diverse car design possibilities, including highly effective concepts, unusual ideas, and intentionally flawed options.
Source: The Hill
Dr. Sean Walton, Turing Fellow and Associate Professor of Computer Science who led the study, explained that when people encountered AI-generated design suggestions, they spent more time on tasks, produced better designs, and felt more involved [1]. This finding challenges conventional thinking about AI design tools, suggesting the technology drives engagement and exploration rather than just productivity. Complementary research from the University of Montreal found that generative AI using older models like GPT-4 already exceeds average human scores on divergent linguistic creativity tests, specifically the Divergent Association Task, when compared against more than 100,000 people [3].

The benefits of human-AI collaboration vary dramatically depending on where creators stand in creative workflows. Research from the University of Houston involving 192 students and 120 professionals revealed that AI raised early-stage scores by 76% in novelty, 24% in relevance, and 97% in complexity during the ideation stage [2]. These gains make AI brainstorming partners particularly valuable when people are still exploring possibilities before committing to a direction.
Source: ScienceDaily
However, the picture shifts during implementation. Assistant Professor Jinghui Hou from the University of Houston found that experienced designers using AI during the finishing stage spent 57% more time yet reached similar creativity scores [2]. Professional designers in a field study using Midjourney V6.1 spent about 14.6 extra minutes when AI entered only during implementation. This slowdown stems from expertise fixation, where years of training harden into practiced routines that clash with AI-generated output requiring translation back into familiar methods [2].

The Swansea research highlights that variety in AI-generated output plays a crucial role in creative outcomes. Dr. Walton emphasized that participants responded most positively to galleries including a wide variety of ideas, including flawed ones, which helped them move beyond initial assumptions and explore a broader design space [1]. This structured diversity prevented early fixation and encouraged creative risk-taking, demonstrating that abundance helps during the search phase even if it becomes noise during finishing.
Source: Earth.com
Among lower-expertise students, implementation improved novelty, relevance, and complexity when AI arrived during later stages, suggesting the technology lowers barriers for beginners while frustrating those with strong established habits [2]. Screen recordings showed expert designers adding elements and editing existing ones more frequently before settling on final versions, while less experienced creators could accept AI suggestions and keep moving toward workable results.
The Swansea study, published in the ACM journal Transactions on Interactive Intelligent Systems, highlights problems with how AI design tools are typically assessed. Standard metrics often focus on simple behaviors such as click frequency or how often users copy AI suggestions, overlooking important aspects including how technology influences thoughts, emotions, and willingness to explore new ideas [1]. The researchers argue that AI systems should be evaluated using broader methods capturing these deeper effects on engagement and strategy.

Professionals reported more mental stimulation during brainstorming while feelings of overload barely moved, helping explain why early experimentation opened the creative process instead of freezing it with too many options [2]. Most participants still carried roughly one option into the final stage even after exploring several machine-made possibilities, indicating AI expanded the number of ideas without trapping people in endless indecision.

These findings point toward a future where the one-person creative studio becomes operational reality. A single creative lead equipped with multiple AI collaborators can cover territory that once required several specialists for first drafts, with the human providing taste, ethics, positioning and audience empathy while models provide relentless iteration [3]. The University of Montreal study showed that with better prompting and directions, AI creative output substantially improves, suggesting creative leaders will tune models the same way they tune human teams through context and constraint [3].
Hou recommended that all people embrace AI in the brainstorming stage, while noting that in the implementation stage, AI remains helpful for ordinary people but creates more work for expert designers [2]. This advice points toward systems that adapt to users instead of forcing every user to adapt to the system. As AI becomes increasingly embedded in creative fields from engineering and architecture to music and game design, understanding how humans and intelligent systems work together becomes essential for maximizing both productivity and creative exploration [1].

Summarized by Navi