My fiction has been pirated to train generative AI systems owned by corporations worth billions of dollars. Because of that, like many writers and artists, I was dismayed and angry when the news broke that the SF/F genre's premier convention, Worldcon, had used ChatGPT to vet potential panelists.
Since my original report on all this came out last week, I've interviewed people involved in running the 2025 Seattle Worldcon to pull together a clearer picture of what exactly happened. Based on these interviews, it appears the use of ChatGPT was indeed limited to the vetting of panelists. However, I also learned that the use of ChatGPT was not initially approved by Worldcon leadership. Instead, a lower-level Worldcon volunteer decided on their own to use the generative AI program, believing it could complete a time-intensive project when there were not enough volunteers to do the job manually.
My interviews also uncovered a cultural disconnect in how Worldcon volunteers viewed the use of generative AI programs: volunteers who are writers and artists strongly opposed the use of GenAI, while other volunteers who hadn't been personally affected by these programs saw GenAI as a possible tool to streamline otherwise time-intensive work.
How Worldcon's ChatGPT Use Happened
Last week Kathy Bond, the chair of the 2025 Seattle Worldcon, and SunnyJim Morgan, the convention's program division head, released detailed statements on what happened with the use of ChatGPT and how they will rectify this mistake.
In her statement, Bond again apologized for the use of ChatGPT and specified that "no selected panelist was excluded based on information obtained through AI without human review and no selected panelist was chosen by AI."
She added,
"ChatGPT was used only in one instance of the convention planning process, specifically in the discovery of material to review after panelist selection had occurred.
"It was not used in any other setting, such as
deciding who to invite as a panelist
writing panel descriptions
drafting and scheduling our program
creating the Hugo Award Finalist list or announcement video
administering the process for Hugo Award nominations
publications
volunteer recruitment"
Bond said that in response to this mistake, the Seattle Worldcon will redo "the part of our program process that used ChatGPT, with that work being performed by new volunteers from outside our current team." They are also "reaching out to a few outside members of the community with prior experience in Worldcon programming to come in and perform an audit of our program process" and working to improve the "shortcomings in our internal communications" that were revealed by this episode.
In his own statement, SunnyJim Morgan also apologized for the use of ChatGPT, saying,
"OpenAI, as a company, has produced its tool by stealing from artists and writers in a way that is certainly immoral, and maybe outright illegal. When it was called to my attention that the vetting team was using this tool, it seemed they had found a solution to a large problem. I should have re-directed them to a different process. Using that tool was a mistake. I approved it, and I am sorry. As will be explained later, we are embarking on the process of re-doing the vetting stage for every invited panelist, completely without the use of generative AI tools."
SunnyJim's statement closed by making note of the following points:
"Track leads selected panelists, who were then vetted, only for disqualifying information
Applicants who were not selected were not vetted by AI
We did not pay for the searches done
Only the panelists' name, not their work or other identifying information was entered into the prompt
No panel descriptions, bios, or other output was generated by AI
Scheduling and selection of who is on which panel is being done entirely by people"
After the statements from Kathy Bond and SunnyJim Morgan were published, I reached out to a number of people involved in this year's Worldcon. I ended up interviewing four of the 30 program track leads, along with a higher-ranking Worldcon volunteer.
I also contacted Kathy Bond about doing an interview. Bond told me she couldn't comment on the record at this time beyond what she'd said in her public statement.
Almost all of the people I spoke with asked for their interviews to be either on background or off the record for various reasons, such as not being authorized to speak to the press. One person I interviewed was initially willing to be on the record, but that changed during our discussions. I am respecting these decisions and will not include any identifying information in this report.
Despite these restrictions, the interviews allowed me to confirm large portions of the statements from Kathy Bond and SunnyJim Morgan.
In particular, all of the program track leads I interviewed confirmed that none of them used any generative AI program to sort through the more than 1,300 panelist applications they received or to make decisions on panelist selections. These track leads also told me the information submitted by applicants was contained in a very large shared spreadsheet. Each of the 30 track leads was responsible for using that shared spreadsheet to pull together participants for an individual programming track on a subject such as Art, Fanfic, or Indie Publishing. To do this, each track lead manually searched the spreadsheet for people who were either interested in their track or seemed like a good fit for it.
"The process was onerous," one track lead told me, saying it took more than six weeks to manually select the potential panelists for their programming track alone.
Once all the track leads had selected the panelists for their tracks, the list of names was sent to the vetting team. It's at this point that things appear to have gone off the rails with regard to the use of generative AI.
Evidently, the use of ChatGPT was not originally authorized by Bond or Morgan and was initially done without their knowledge by someone on the vetting team. This appears to have happened because the vetting team was short-staffed.
Originally, the vetting team consisted of six volunteers. However, when the time came to begin the vetting process, all but two of them were no longer responding to communications or were unwilling to take part. With the manual vetting searches taking 10 to 30 minutes per applicant, as Kathy Bond described in her original April 30th statement about the use of ChatGPT, every hundred names would have meant roughly 17 to 50 hours of searching. That is an extremely large amount of work for only two people.
Because of concerns about being able to complete their assigned work, one of the vetting team members decided on their own to use ChatGPT as a time-saving measure. As SunnyJim Morgan's subsequent statement described, the list of potential panelist names was fed into ChatGPT along with a specific search query. Any negative information returned about a possible panelist was then supposed to be manually confirmed.
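To make the workflow concrete, here is a minimal sketch of the kind of automated lookup described above. It is illustrative only: the actual prompt wording, the model used, and how results were handled have not been made public, so the query template, model name, and flagging logic below are my assumptions, not the vetting team's script.

```python
# Illustrative sketch only. The real prompt, model, and handling used by the
# Worldcon vetting volunteer are not public; everything here is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical query template; the actual search query is not reproduced here.
QUERY_TEMPLATE = (
    "Is there any publicly reported, disqualifying negative information "
    "about the following person in the context of attending a convention? "
    "Answer 'NONE FOUND' if you find nothing. Person: {name}"
)

def vet_name(name: str) -> str:
    """Send one panelist name to the model and return its raw text response."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the one actually used is unknown
        messages=[{"role": "user", "content": QUERY_TEMPLATE.format(name=name)}],
    )
    return response.choices[0].message.content or ""

def flag_for_human_review(names: list[str]) -> list[tuple[str, str]]:
    """Return (name, response) pairs whose responses need manual confirmation."""
    flagged = []
    for name in names:
        answer = vet_name(name)
        if "NONE FOUND" not in answer.upper():
            flagged.append((name, answer))  # a human still has to verify this
    return flagged
```

The appeal is obvious: a loop like this runs through a list of names in minutes, while the equivalent manual searches take 10 to 30 minutes per name. That time savings is exactly what made the shortcut tempting, and exactly what got Seattle Worldcon into trouble.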
It appears Worldcon leadership only learned that ChatGPT had been used in this manner after the fact, when the person on the vetting team revealed what they'd done and said there was no other way to complete the vetting with so few volunteers on the team. While Worldcon leadership had concerns about the use of generative AI, they decided to retroactively accept its use because ChatGPT had already been run and because the vetting team still lacked the needed volunteers.
What came after is now well known: Word about the use of ChatGPT quickly spread among Worldcon volunteers and the larger genre community.
I'm told the person on the vetting team who originally decided to use ChatGPT is no longer involved in the vetting process.
Despite what happened, the programming track leads I interviewed said they were still fully confident in their work setting up the programming. They also opposed the idea of Kathy Bond or SunnyJim Morgan resigning over this issue, saying that while a mistake was made, it has now been corrected.
"This was a dumb mistake, a disconnected mistake," one track lead said. "However, this wasn't the result of a systemic issue with this year's Worldcon."
Clashing Views of Generative AI
The track leads I interviewed said none of them learned that ChatGPT had been used for the initial vetting until well after the fact. All of them also said they strongly opposed the use of generative AI for this purpose, with several saying their own works had been stolen and used by companies to train Large Language Model (LLM) programs like ChatGPT.
"When I first heard this happened, I thought I might step down because I didn't want to be associated with LLM use," one track lead said, who has had their own creative work pirated to train GenAI programs.
This track lead chose to stay on after concluding this was an isolated mistake made by well-meaning people trying to save time on a very labor-intensive process. But as evidenced by last week's resignations of three people from Worldcon over the use of ChatGPT – including Hugo Awards administrator Nicholas Whyte and deputy Hugo administrator Esther MacCallum-Stewart – others felt Worldcon had crossed a line and decided the only option was to step down from their volunteer roles.
One thing I found interesting is that while all the programming track leads I spoke with were very opposed to any use of generative AI by Worldcon, this view wasn't shared by all volunteers.
As one track lead said, "Many Worldcon volunteers are not writers or artists, so they have not had their work stolen. They don't understand on a personal level why even this limited use of ChatGPT was so bad."
"Some volunteers understand the outrage, others don't," another track lead said. "But that's similar to how the general public sees the use of generative AI."
It appears the volunteer who made the decision to use ChatGPT was not someone who had experienced their work being stolen.
Sometimes it's not evident to people who aren't artists or writers how much pain generative AI systems have caused people in creative industries, let alone the potential loss of income. And while many writers and artists volunteer to work on large conventions like Worldcon, far more of the volunteers are simply SF/F fans.
In a recent op-ed on File770, Erin Underwood wrote about the struggles faced by the volunteers who organize and run many of the SF/F genre's conventions. As Underwood said, it's easy to "recognize the appeal of using a tool like ChatGPT to help reduce a substantial, unpaid workload that is getting harder and harder to manage every year."
It takes a lot of volunteers to run a convention the size of Worldcon. On the Seattle Worldcon's "Committee and Staff" page, more than 200 volunteers are listed. While the convention chair and programming leads may receive the most attention from attendees, volunteers are needed for everything from finance, facilities, and publications to events, technology, and many other areas.
It's likely future conventions, especially large ones such as Worldcon, will need to have discussions with volunteers and reach a consensus in advance on whether or not to use generative AI systems to speed up work. Otherwise, some volunteers will be tempted to use these systems, even if the convention as a whole is opposed to their use.
The Programming Difficulties Faced by Volunteer-Run Conventions
The track leads I interviewed also said they have heard the complaints in the SF/F genre about who did or didn't get selected for this year's Worldcon programming. Some famous authors weren't selected. Other, less well-known writers were.
All the track leads said they made their choices based on the information provided by applicants, with the goal of creating the best possible programming tracks. The track leads also spoke with frustration about potential panelists who filled out the required panelist interest forms at the last minute, or authors who assumed everyone knew who they were and so provided little if any of the requested information.
In particular, one person I interviewed recommended that people read Erin Underwood's entire File770 op-ed to understand the difficulties of choosing convention program participants.
As Underwood wrote,
"... one of the biggest issues that any convention program team faces (whether local, regional, or international) is figuring out who all these people are who are applying to be on the program. This is even more difficult since so many people complain about filling out longer, more detailed surveys. After years of running program teams or being a part of program teams, I can safely say that the information provided by potential program participants is often insufficient to make them stand out from all the other people who are also asking to be on the program."
Underwood also pointed out in the op-ed that conventions used to have long-term volunteers who knew all the people in the genre and so could easily assign people to programming based on their own knowledge. But those people have mostly retired, with their place taken by new volunteers who must rely on the information shared with them or on their own more limited knowledge.
Per Underwood:
"As longer-term volunteers disappear, their wealth of knowledge also disappears. This leaves new volunteers who are largely only familiar with the people they have read, seen, or are familiar with in their more limited capacity as a newer volunteer. We NEED new volunteers, and they are often doing the very best they can with the experience and information they have available to them.
All of these points were made to me repeatedly by the people I interviewed.
As one higher-ranking Worldcon volunteer told me,
"I wish more people would share interesting facts about themselves when they apply to be on programming. Too many applicants merely provide a list of publications and assume that's enough to get picked for programming – it isn't. Every writer can take part in panels like 'Worldbuilding 101' or 'How to Create Compelling Characters.' But we only have a few panels like that. What we need are more applicants who share interesting, off-beat and in-depth information about themselves and their work. We need people who can help us fill out the entire schedule and be on panels such as 'Surviving the Zombie Apocalypse.'"
Conclusion
I remain opposed to the use of generative AI in most creative pursuits such as writing and the arts. I've written extensively on this subject. I also believe the corporations pushing this technology do not care about how GenAI harms individuals, societies, or the Earth. These companies likewise don't care that they built their programs by pirating the works of countless artists and writers without permission or payment, even though it was well within their means to both get permission and pay people.
Yes, Seattle Worldcon made a serious mistake in using ChatGPT to vet potential panelists. However, based on what I've learned, it's easy to understand how this mistake happened, especially in light of the issues faced by volunteer-run conventions. In addition, Seattle Worldcon leadership appears to be sincere in their apologies for using GenAI and committed to correcting what happened.
All that said, the issue of using GenAI to speed up the work of volunteer-run conventions will likely come up again in the future. Just as many businesses and governments are adopting GenAI to accelerate their work processes, there will be increasing pressure from convention volunteers to use generative AI systems as an alternative to sinking entire days or weeks of their lives into doing something ChatGPT can accomplish in mere minutes.
I don't know what the answer to all this might be, but I do know the corporations pushing generative AI want people to consider only the time savings their programs provide and not the harm they also do. As the SF/F genre considers this issue with regard to future conventions, I hope both the pros and cons of GenAI are discussed and remembered.