2 Sources
[1]
WorldCon use of AI to vet panelists prompts backlash
Leave it to the Borg? Scribe David D. Levine slams 'use of planet-destroying plagiarism machines'

Fans and writers of science fiction are not necessarily enthusiastic about artificial intelligence - especially when it's used to vet panelists for a major sci-fi conference.

The kerfuffle started on April 30, when Kathy Bond, the chair of this summer's World Science Fiction Convention (WorldCon) in Seattle, USA, published a statement addressing the use of AI software to review the qualifications of more than 1,300 potential panelists. Volunteers entered the applicants' names into a ChatGPT prompt directing the chatbot to gather background information about each person, as an alternative to potentially time-consuming search engine queries.

"We understand that members of our community have very reasonable concerns and strong opinions about using LLMs," Bond wrote. "Please be assured that no data other than a proposed panelist's name has been put into the LLM script that was used."

The statement continues, "Let's repeat that point: no data other than a proposed panelist's name has been put into the LLM script. The sole purpose of using the LLM was to streamline the online search process used for program participant vetting, and rather than being accepted uncritically, the outputs were carefully analyzed by multiple members of our team for accuracy."

The prompt used, as noted in a statement issued Tuesday, was the following:

Using the list of names provided, please evaluate each person for scandals. Scandals include but are not limited to homophobia, transphobia, racism, harassment, sexual misconduct, sexism, fraud. Each person is typically an author, editor, performer, artist or similar in the fields of science fiction, fantasy, and or related fandoms. The objective is to determine if an individual is unsuitable as a panelist for an event. Please evaluate each person based on their digital footprint, including social, articles, and blogs referencing them. Also include file770.com as a source. Provide sources for any relevant data.

The results were reviewed by a staff member because, as Bond acknowledged, "generative AI can be unreliable" - an issue that has been raised in lawsuits claiming defamation for AI-generated falsehoods about people. These reviewed panelist summaries were then passed on to staff handling the panel programming.

Bond said that no possible panellist was denied a place solely as a result of the LLM vetting process, and that using an LLM saved hundreds of hours of volunteer time while resulting in more accurate vetting.

The tone-deaf justification triggered withering contempt and outrage from authors such as David D. Levine, who slammed the convention's "use of planet-destroying plagiarism machines."

Author Jason Sanford offered a similar take: "[U]sing LLMs to vet panelists is a powerful slap in the face of the very artists and authors who attend Worldcon and have had their works pirated to train these generative AI systems. My own stories were pirated to train LLMs. The fact that an LLM was used to vet me really pisses me off. And you can see similar anger from many other genre people in the responses to Kathy Bond's post, with more than 100 comments ranging from shock at what happened to panelists saying they didn't give Worldcon permission to vet them like this."

Following the outcry, World Science Fiction Society division head Cassidy, Hugo administrator Nicholas Whyte, and deputy Hugo administrator Esther MacCallum-Stewart stepped down from their roles at the conference. On Friday, Bond issued an apology.

"First and foremost, as chair of the Seattle Worldcon, I sincerely apologize for the use of ChatGPT in our program vetting process," said Bond. "Additionally, I regret releasing a statement that did not address the concerns of our community. My initial statement on the use of AI tools in program vetting was incomplete, flawed, and missed the most crucial points. I acknowledge my mistake and am truly sorry for the harm it caused."

While creative professionals have varying views on AI, and may use it for research, auto-correction or more substantive compositional assistance, many see it as a threat to their livelihoods, as a violation of copyright, and as "an insult to life itself."

The Authors Guild's impact statement on AI acknowledges that it can be commercially useful to writers even as it poses problems in the book market. The writers' organization, which is suing various AI firms, argues that legal and policy interventions are necessary to preserve human authorship and to compensate writers fairly for their work.

In a joint statement posted on Tuesday evening, Bond and program division head SunnyJim Morgan offered further details about the WorldCon vetting process and reassurances that panellist reviews would be re-done without AI.

"First, and most importantly, I want to apologize specifically for our use of ChatGPT in the final vetting of selected panelists as explained below," Morgan wrote. "OpenAI, as a company, has produced its tool by stealing from artists and writers in a way that is certainly immoral, and maybe outright illegal. When it was called to my attention that the vetting team was using this tool, it seemed they had found a solution to a large problem. I should have re-directed them to a different process."

"Using that tool was a mistake. I approved it, and I am sorry."

Con organizers are now re-vetting all invited panellists without AI assistance. ®
[2]
Seattle Worldcon's Chair Clarifies Use of ChatGPT and Offers Another Apology
The Hugo Awards themselves did not factor into the 2025 convention's use of AI to help create its programs.

As sci-fi fans well know by now, it wouldn't be the Hugo Awards without some spice on the side. In recent years, the esteemed prize (handed out annually at the World Science Fiction Society convention, also known as Worldcon, whose members vote on the honorees) has weathered its share of controversies, with negative headlines (take your pick: geographical censorship; racism; ties to defense manufacturers; anti-"woke" backlash before anti-"woke" was even a thing) sometimes getting more attention than the books and other media the Hugos aim to celebrate.

As io9 previously reported, this year's flashpoint is the use of ChatGPT, but fortunately not in a way that has any impact on the actual Hugo Awards. The Hugo ceremony is just one part of August's Seattle Worldcon 2025; it's also a convention that hosts panels featuring authors and other sci-fi and fantasy luminaries. Within the past two weeks, it was discovered that ChatGPT had been used to help vet program participants; three people involved, including two Hugo administrators, resigned as a result, and Seattle Worldcon 2025 chair Kathy Bond issued first a statement on the issue, followed by a separate apology.

However, the issue is still rankling sci-fi and fantasy fans on social media, not to mention would-be Hugos honorees; as blog File 770 reported, author Yoon Ha Lee went so far as to remove his novel Moonstorm from Hugo consideration (it had been nominated for the Lodestar Award honoring YA works).

Yesterday, Bond posted a third message about the controversy, with an additional statement from program division head SunnyJim Morgan. Bond's portion reiterates how ChatGPT was used, with details and specifics. Of particular note, it was not used for "creating the Hugo Award Finalist list or announcement video" or "administering the process for Hugo Award nominations." She also includes a renewed apology not just about the use of AI, but the way she initially responded to concerns by releasing a "flawed statement" when the issue was first brought to light. She also says that "we are redoing the part of our program process that used ChatGPT, with that work being performed by new volunteers from outside our current team" and pledges that Worldcon (whose staff is all-volunteer, she notes) will do all it can to regain the community's trust moving forward.

To that end, Morgan's statement is both apology and an even deeper dive into how ChatGPT was used, including the actual prompt used to vet potential program participants:

REQUEST

Using the list of names provided, please evaluate each person for scandals. Scandals include but are not limited to homophobia, transphobia, racism, harassment, sexual misconduct, sexism, fraud. Each person is typically an author, editor, performer, artist or similar in the fields of science fiction, fantasy, and or related fandoms. The objective is to determine if an individual is unsuitable as a panelist for an event. Please evaluate each person based on their digital footprint, including social, articles, and blogs referencing them. Also include file770.com as a source. Provide sources for any relevant data.

Morgan reports that his team "did not simply accept the results that were returned" as a result of ChatGPT's vetting. "Instead, links to primary content returned during vetting were reviewed by our team and myself before a final decision whether to invite the person was made. Ultimately, this process led to fewer than five people being disqualified from receiving an invitation to participate in the program due to information previously unknown."

Read the full statements from Bond and Morgan here; Bond writes that Seattle Worldcon's next update will come May 13, so it'll be interesting to see if there's more coming on the ChatGPT issue, or if the organizers will be moving forward from this point. What do you think of this latest olive branch, and do you think Worldcon is handling its latest dust-up effectively? Let us know in the comments.
The 2025 World Science Fiction Convention (WorldCon) in Seattle faces backlash for using ChatGPT to vet potential panelists, leading to resignations and apologies from organizers.
The 2025 World Science Fiction Convention (WorldCon) in Seattle has found itself embroiled in controversy after using ChatGPT, an AI language model, to vet potential panelists. The decision has sparked outrage within the science fiction community, leading to resignations and multiple apologies from the event organizers 1 2.
WorldCon chair Kathy Bond initially stated that the AI was used to streamline the vetting process for over 1,300 potential panelists. The ChatGPT prompt used for vetting included:
"Using the list of names provided, please evaluate each person for scandals. Scandals include but are not limited to homophobia, transphobia, racism, harassment, sexual misconduct, sexism, fraud." 1
Bond emphasized that only the panelists' names were input into the AI system, and the results were reviewed by staff members before being passed on to programming teams 1.
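To make concrete what "putting only a name into the LLM script" means in practice, here is a minimal sketch of how such a vetting script could be wired up with the OpenAI Python SDK. Worldcon has not published its actual code, so the model choice, batching, and function names below are assumptions for illustration; only the prompt text is taken from the organizers' statements.

```python
# Hypothetical reconstruction for illustration only: Worldcon has not released
# its script, and the model name, batching, and helper names here are guesses.
from openai import OpenAI

# Prompt text as quoted in the organizers' statements; only names are appended.
VETTING_PROMPT = (
    "Using the list of names provided, please evaluate each person for scandals. "
    "Scandals include but are not limited to homophobia, transphobia, racism, "
    "harassment, sexual misconduct, sexism, fraud. Each person is typically an "
    "author, editor, performer, artist or similar in the fields of science "
    "fiction, fantasy, and or related fandoms. The objective is to determine if "
    "an individual is unsuitable as a panelist for an event. Please evaluate "
    "each person based on their digital footprint, including social, articles, "
    "and blogs referencing them. Also include file770.com as a source. Provide "
    "sources for any relevant data."
)


def vet_panelists(names: list[str], model: str = "gpt-4o") -> str:
    """Send only the candidate names, appended to the fixed prompt, to the model."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": VETTING_PROMPT + "\n\nNames:\n" + "\n".join(names),
        }],
    )
    # The raw output still needs human review against primary sources, since the
    # model can fabricate "scandals" - exactly the unreliability Bond acknowledged.
    return response.choices[0].message.content


if __name__ == "__main__":
    print(vet_panelists(["Example Panelist A", "Example Panelist B"]))
```

Even in this stripped-down form, the sketch shows why the organizers stressed that nothing beyond names was submitted, and why the returned summaries still had to be cross-checked by staff before any decision was made.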
The use of AI for vetting panelists was met with strong criticism from authors and community members. David D. Levine, a prominent science fiction author, expressed his disapproval, calling it a "use of planet-destroying plagiarism machines" 1. Author Jason Sanford described it as a "powerful slap in the face" to artists and authors whose works may have been used to train AI systems without their consent 1.
The controversy led to the resignation of three key WorldCon staff members, including the World Science Fiction Society division head and two Hugo Award administrators 1. In response to the backlash, Kathy Bond issued multiple apologies, acknowledging the mistake in using ChatGPT and the initial inadequate response to community concerns 2.
In subsequent statements, WorldCon organizers provided more details about the AI vetting process, including the exact prompt used and the fact that fewer than five people were ultimately disqualified based on information the vetting surfaced 2.
WorldCon organizers have committed to redoing the vetting process without AI assistance, using new volunteers from outside the current team 2. They have also pledged to work on regaining the community's trust and will provide further updates on the situation 2.
This incident highlights the ongoing tensions between AI technology and creative industries. Many authors and artists view AI as a threat to their livelihoods and a potential copyright violation 1. The Authors Guild, while acknowledging some benefits of AI for writers, argues for legal and policy interventions to preserve human authorship and ensure fair compensation 1.
As the science fiction community grapples with this controversy, it raises important questions about the role of AI in creative spaces and the ethical considerations surrounding its use in organizational processes.
Summarized by
Navi