Genre Grapevine on What AI Generated Art and Writing Might Mean for Artists and Authors
Note: this report is also available free to the public. If you like my writings on genre issues, consider backing my Patreon.
The other day I asked ChatGPT a simple question: “Will artificial intelligence replace writers?” Here’s the response:
It is possible that AI could be used to generate certain types of written content, such as news articles or product descriptions. However, it is unlikely that AI will be able to replace the creativity and unique perspectives that human writers bring to their work. Additionally, AI-generated content may not have the same level of nuance, emotional depth, and cultural understanding as content created by humans. Therefore, it is likely that AI will complement the work of writers rather than replace them.
ChatGPT, the so-called “artificial intelligence” chatbot from OpenAI, gave similarly worded responses when I asked how AI might affect artists. No matter how I reworked the question, ChatGPT crafted essentially the same optimistic take on what creative machine learning tools mean for our world.
But the reality of machine learning is, as expected, far more complex than ChatGPT would have us believe.
Creativity has always had the power to change our world for the better. For that reason, creativity has frequently scared the powerful, the wealthy, and those who aim to control our lives and our world.
I’ve been keeping that in mind as people crow over new advances in machine learning programs that appear to mimic human creativity. Many of the people creating these programs do so with good intentions. And I agree that writers, artists and other creative professionals will be using these tools in the coming years.
But there’s also a massive difference between the ideals of those who create technology and how that technology is later used by those with power in this world. My fear is that large corporations, the powerful, the rich and the elite will try to use AI programs to edge artists, writers and other creatives out of the already underpaid positions they occupy in our world.
Whether they succeed is yet to be decided. To help prepare people for what is to come, in this report I’ll examine the current state of these technologies and give my best thoughts on what artists, writers, and other creatives need to consider as they prepare for the future.
Where We Are Now
AI art and writing programs have been around for much of the last decade. And while these programs are called artificial intelligence, there’s actually no intelligence behind them. As Ted Chiang said in an excellent 2021 essay, despite what people think, we’re likely still a “long way off from being able to create a single human-equivalent A.I., let alone billions of them.”
Instead, these creative programs are examples of machine learning, using algorithms created from data samples (also called training data). And what these art and writing programs are being trained on are examples of previously created art and writing.
For most of the last decade the work produced by these programs was unimpressive – maybe a badly created picture here or a simple paragraph there. However, last year this changed dramatically. Machine learning art programs suddenly gained viral attention, with programs such as DALL-E 2, Stable Diffusion and Midjourney being used by millions of people to create artworks based on different word prompts. For many months you couldn’t go on social media without seeing friends and famous people sharing images they’d created using these art programs.
The programs are both easy to use and fun. Even if you couldn’t draw to save your life, you could now use them to create professional-looking art.
By the middle of last year artists and others began pushing back. Dante, a creative advisor at Riot Games, said in a viral tweet that “at the core of every ‘AI generated art’ program is mountains of stolen, unsourced art, fed like a grinder into a sophisticated algorithm that is, fundamentally, uncreative.” A few months later a thread from artist RJ Palmer gained even more attention, with Palmer pointing out that some of these AIs were being “explicitly trained on current working artists” and that one AI even tried to recreate the logo of the artist it was copying.
Perhaps the biggest wake-up call for artists came when a science-fiction-styled work of art created with Midjourney won the blue ribbon in the Colorado State Fair’s annual competition for emerging digital artists. Jason M. Allen created the work in Midjourney and stated as much when submitting his entry. The news received worldwide attention, as did Allen’s statement about winning: “I’m not going to apologize for it. I won, and I didn’t break any rules.”
And then ChatGPT debuted in November, creating a viral writing counterpart to the art programs.
With that the AI floodgates had been opened.
Is AI a New Creative Tool or Creativity’s Replacement?
During a recent panel on AI programs at the ConFusion convention in Detroit, John Scalzi said that how all this turns out depends to a large degree on whether AI creativity programs are assistive tech or disruptive tech. Basically, will these developments turn out to be another tool artists and writers use to create their own works, or will machine learning be used to disrupt and possibly destroy the work opportunities and livelihoods of artists and writers?
“There’s so much you can do that is assistive to human creativity rather than usurping creativity,” Scalzi said.
As an example of assistive technology, several people interviewed for this report referenced the changes that swept photography beginning in the 1990s. First came the appearance of digital cameras, followed by cameras on smartphones like the iPhone. Then machine learning appeared and improved digital photography on cameras and smartphones. Initially, photographers who used traditional photographic film protested these changes. However, today photography is in many ways more popular than ever, and few complain that an AI-assisted system on their smartphone helps them take better photos.
One person who discussed this comparison to photography with me was Amit Gupta, a science fiction writer and co-founder of the AI-writing tool Sudowrite.
“The first few covers that used iPhone photos angered people because it wasn’t up to their standards,” Gupta said. “And now it’s a matter of course. The iPhone isn’t making the photo, it’s the photographer. The same is true of AI tools. AI art tools like Midjourney are a process and a tool like any other. If you want to get it to do a specific thing there’s a learning curve, but eventually a lot of artists will use it as their toolset.”
There are already writers and artists using AI as one of the tools in their creativity toolset. One author I spoke with said they use Midjourney to help visualize scenes for their stories. Another described “talking” with ChatGPT to brainstorm ideas for stories. A third told me they’re trying to use AIs to create submission synopses of their books, a task many authors hate with the burning flames of a billion stars. (On a humorous note this person, who like a number of the people I talked with asked not to be identified because of possible career pushback for using these tools, told me they’d “Kiss AI ass every day” if AIs would write their synopses.)
All of that would fall under what Scalzi called using AI as assisted tech for creators.
When I think about how all this could mesh with our world, I wonder if artists and writers might eventually use machine learning in similar ways to how a Hollywood film director works. Where a director relies on a large staff to create films under their direction, perhaps an artist or writer might direct machine learning tools to create new images or stories. In short, the various AI tools working for an artist or writer would be their “staff.”
Writers and artists already use variations of machine learning, be it an author using Google to do research or an artist using different AI-informed filters to change how their art is viewed. So perhaps machine learning will be merely an extension of how people have always used tools to aid human creativity.
But that’s also an optimistic view on how all this turns out. As we’ve seen in recent decades, when tech companies aim to “disrupt” an aspect of human life, the outcomes frequently benefit those companies and those in power at the expense of everyone else.
When Machine Learning Trains on Your Creations
A major reason artists and writers are suspicious of machine learning is because the groups creating many of these programs haven’t been very open about the art, photos and written works the AIs were trained on.
For example, OpenAI is the research laboratory that created DALL-E and ChatGPT, among other programs. The lab was founded in 2015 by a number of tech investors, including Sam Altman, Elon Musk and Peter Thiel. Microsoft has also provided large investments to OpenAI, including $10 billion last month.
So far OpenAI hasn’t been very open about the works their programs are trained on.
The same goes for Midjourney, whose founder David Holz recently said he didn’t seek consent from living artists or those with work still under copyright because it was essentially too hard to do so. And don’t think this is a small issue – in an interview with Forbes, Holz admitted Midjourney was trained on at least a hundred million images without consent.
Because these AIs were trained on works by living artists, this can result in the programs creating images based on their art. For example, Deb JJ Lee discovered that someone had crafted an AI model to create art similar to Lee’s own distinctive work. Worse, when Lee pushed back on their art being used in this way, they were accused of being a “gatekeeper.”
As Lee said, “I never hide how I draw. I teach classes and share *everything*, from my layer structure to my inspirations to Gradient mapping. At Lightbox this year I would show my original files to ppl who come to my table to demonstrate how I do everything. I’m the opposite of a gatekeeper.”
Despite that, Lee was essentially blamed by a number of supporters of AI programs for daring to question the use of their own art in the training of machine learning programs.
It’s almost like, as Alasdair Stuart said, “the entire system is powered by artists but devalues them in every way.”
We’ve seen more examples of artists dealing with issues like these because powerful image creation programs have been around longer. But with new writing programs like ChatGPT going viral and other similar programs coming online – including Google’s new Bard AI – writers will likely soon experience their own examples of an AI trained on their work.
And there have already been complaints that Sudowrite trained their AI on AO3 without permission.
However, in an interview Sudowrite co-founder Amit Gupta absolutely denied this. “We got pulled into this because (like many others) we use base models developed by OpenAI,” Gupta said. “We did not scrape AO3 or anyone else. We haven't trained on any material we don't have rights to.”
As Gupta told me, “OpenAI hasn’t been completely open about what they used for training material for GPT-3, chatGPT, 3.5, etc, but they likely trained on as much of the internet as they could.”
And to add to what Gupta said, I personally believe it’s quite possible OpenAI trained their AIs on a good deal of the fiction and other books published over recent decades. If OpenAI was more open with their methods, perhaps I’d think differently. But for now writers should assume the worst.
People are beginning to take a stand against all this. Artists like Sarah Andersen and Karla Ortiz pushed back against the recent Unstable Diffusion Kickstarter while bestselling author John Scalzi said he expected his publisher to use covers for his books that are “100% human-derived, even if stock art elements are used.”
Artists have also filed class action lawsuits against the creators of Stable Diffusion and Midjourney. And in November a group of computer programmers likewise sued Microsoft, GitHub, and OpenAI in a class action complaint accusing the companies of scraping their code to build GitHub’s AI-powered Copilot tool. And Getty Images recently announced a lawsuit against Stable Diffusion for using its images without a license.
Why all this pushback? Because as Saladin Ahmed told me, “It's not a cute robot like Data creating art. Instead, it's corporations stealing our ideas and writing.”
Publishers and Corporations
In December Corey Brickley pointed out that the cover for the upcoming Tor Books release Fractal Noise was created using the AI art generator Midjourney. Soon after, Petrik Leo noticed another Tor Books cover that appeared to use AI generated art.
As Brickley said, it’s likely Tor Books “bought the rights to a stock image uploaded by someone who makes everything in Midjourney.”
Many publishers no longer directly commission artists to create cover art for their books. Instead, a designer finds stock images and manipulates those images to fit the book’s theme, a practice that still helps artists because the publisher must purchase the images being used (although it’s worth noting the cost of stock images is far less than the price of commissioning new art).
Tor later confirmed this is what happened and that they were unaware the image was created by AI. Tor also said that due to production constraints they’re moving ahead with the current cover.
I understand why Tor was reluctant to change the cover only a few months before the book’s release, and it does appear Tor originally didn’t know AI art was used to create aspects of the artwork. But that said, it is quite telling that this happened with the cover art for Fractal Noise, an eagerly awaited sequel novel from bestselling author Christopher Paolini. After all, that means Tor didn’t pay to commission art for one of their major upcoming releases.
A decade or two ago, it would have been unthinkable for a major publisher not to commission original art for a tentpole release like this. And I fear this already indicates where many publishers will eventually go with AI art.
Or as artist Petar Penev said, it sets a precedent that AI art is okay to use on book covers.
So while some publishers like Chaosium are taking public stands against ever using AI art, what happens in a few years when major stock image companies are filled with millions of AI images? Major publishers are already resistant to paying their editors and staff a living wage, as evidenced by HarperCollins fighting so long against the modest demands of their striking staff members. It’s easy to believe that once the backlash to AI art dies down, at least a few of the major publishers will embrace AI creations with open arms because of the potential cost savings.
As John Scalzi said during that AI panel at ConFusion, “It’s not the tool, it’s the system in which the tool is generated. If the current tools knock the legs out from under creators, we cannot and should not ignore the damage all this may do.”
Must We Commodify Creativity?
One thing that has long made it difficult for the powerful to completely harness creativity is the unpredictable nature of creation.
For example, hundreds of thousands of books are released each year by small and large publishers (along with millions of self-published titles). However, very few of these books are highly profitable, with major publishers instead relying on bestselling books for a large portion of their income.
But what happens if companies begin marketing machine learning programs to book publishers by claiming AI could easily create potential bestselling books for far less than authors are paid? Imagine if ChatGPT studied the writings of James Patterson or John Grisham and began cranking out new books licensed under their bylines.
As Dan Whitehead said in a must-read thread discussing the companies behind many of these machine learning tools, “Here's the thing: they don't see creativity as an end in itself. Techbros just see a process, a manufacturing step in need of streamlining and automating. The aim isn't to make art, it's to create product. And making 180 books a year with AI instead of 2 with people is ‘better’.”
The major benefit of machine learning programs is the ability to create art and written copy extremely fast. Yes, these creations may not be ground-breaking art or compelling stories, but they’d be churned out very, very quickly. It’s the infinite monkey theorem: infinite monkeys typing on keyboards will eventually produce the complete works of William Shakespeare. Except these monkeys are trained on Shakespeare and creating Shakespeare mashups that publishers might decide to publish.
I don’t think everyone working on machine learning is trying to commodify creativity. Amit Gupta and James Yu, the founders of Sudowrite, are both speculative fiction authors who seem to genuinely want to create a tool that helps their fellow writers. And I imagine more artist-driven machine learning tools similar to Sudowrite will emerge in the near future.
However, many of the other companies behind machine learning tools, including Midjourney and OpenAI, don’t appear to respect the creative experience. As Paris Marx said, too many of the people behind these efforts are so focused on the desire to automate all aspects of humanity that they miss the very point of art as human expression.
Gaming Out Possible AI Futures
Based on my discussions with people, here’s my attempt to game out possible futures that result from the emergence of machine learning creativity tools. Please note, though, that these “predictions” depend on the response to this issue from artists and writers and even society as a whole. Essentially, we all have the ability to determine which of these possible outcomes actually occur.
The need for original stories and art will survive
Machine learning tools will likely lack the ability to create original works for a long time to come. And even if they eventually cross that threshold, the works created by machine learning may find it difficult to speak to the deeper human condition. As a result, I predict writers and artists who create their own original works won’t be replaced. For example, it’s difficult to imagine machine learning being able to successfully engage in discovery writing in the coming decades.
Yes, writers and artists will use AI tools in their creations, but that’s all they’ll be using: a tool. Machine learning is trained on and bound by what came before. While writers and artists learn their craft by reading and exploring previously created works, we then take our creations in unexpectedly new directions. That’s something machine learning will struggle to do.
As I’ve written before, creativity is a strange thing. Over 100 billion people have lived and died since humanity first appeared on Earth. No two of those people lived the exact same life. Each one created things from their own unique outlook and experience while also being influenced by the culture and people around them.
Machine learning won’t change the fact that each of us has stories or art or other things that only we can create.
Many writers and artists will use machine learning as a tool
AI programs will likely be used by many writers, artists and other people to aid their creative process. And just as digital photography is no longer seen as being a threat to photographers, this may be how machine learning tools come to be seen for artists and writers.
As I mentioned earlier, several writers I spoke with already described using ChatGPT to brainstorm ideas or Midjourney to visualize scenes. And Mushtaq Bilal recently described using ChatGPT as a “personal writing assistant” to transcribe and edit a speech. In the years to come artists and writers will likely discover many other ways to use these tools to aid their creative process.
We’ll be flooded with AI books and other works
As Amit Gupta told me, “There will be a flooding of the market with additional books and content of all kinds. We’ve seen that already with Amazon. There are already people pumping out Kindle books and AI will make that easier to do.”
I agree and fear this will affect prolific writers and artists the most. A writer who publishes a book every few weeks on Amazon sounds incredibly fast today, but how will they compete with an author using AI to publish a book every few days or even daily? And there are definitely many people who won’t care if the AI-derived books or art they want to consume are filled with cliches or basically retreads of previous artworks.
Writers and artists will still be involved in creating these works
However, even if AI is used to flood the market with books or art, I believe humans will still be involved in this process either as editors, designers, or co-authors and co-artists. For example, Matthew Claxton has read, and I quote, “hundreds of badly-written formulaic crap books over the years (sci-fi, fantasy, approx. 100 Sweet Valley Highs).” As Claxton said on Twitter in response to the arrival of machine learning tools, “I am… weirdly optimistic that AIs won't be very good at writing even shitty books.”
Read Claxton’s entire thread for more on why he’s optimistic, but in general I agree with him. The first few AI-written novels will be a novelty. But after that, many people will see how these works merely repeat what’s been created before. Even if you love reading the same types of stories or seeing the same art day after day, that will eventually burn out many people. They’ll crave something original that speaks to the human condition.
Obviously not everyone will react like that – there’s a reason “pulp fiction” has, despite evolutions in how these stories are delivered, remained popular to this day. And machine learning will enable some writers and publishers to respond very quickly to changing market dynamics. If a book or graphic novel hits it big, expect machine learning to help competitors crank out clones of that work within a few days or even hours.
Publications, publishers, and critics may grow in importance
While I worry about large corporations and publishers embracing the speed and cost savings of AI creations over works created by authors and artists, I also believe machine learning will increase the importance of publishers and publications. After all, if machine learning enables stories and art to flood the marketplace, many people around the world will increasingly rely on publishers, magazines, critics and the like to help them find the stories and art worth their time.
Powerful artists and writers may benefit the most
As one new writer told me (and requested anonymity to say this), people who can afford to outsource their work already do so. So why will using AI be any different?
For example, James Patterson is essentially a brand these days, relying on a seemingly endless stream of co-authors to help write his books. But what if Patterson could have a unique AI learn his writing style and help him publish a book every few days? Or what if the heirs of the estates of Tom Clancy or Michael Crichton did the same? Publishers could bring in editors or ghostwriters to clean up those stories and they’d have books that would likely sell pretty well, as Patterson’s co-authored titles already do.
It’s also possible popular artists could use AI to create their own publishing empires. A decade ago James Frey created a "fiction factory" where he paid other people $250 to write novels for him. With AI, popular authors could easily gain the ability to create fiction factories of their own.
Corporate franchises may lean heavily on AI
Corporations that own major media franchises like Star Wars, the Marvel Comics universe, Star Trek and Pokémon will likely heavily embrace AI. The #DisneyMustPay campaign has already shown that Disney is willing to not pay what they owe to the authors who wrote many of the core works supporting the company’s intellectual property. If corporations used AI to create works in their IP universes, they could then use editors or staff members to clean up the stories and publish them without having to pay the very authors they’re already resistant to paying.
I could also see Disney and other corporations creating their own “subscriber only” machine learning tools. Perhaps Disney+ subscribers would have access to a special Disney Princess AI where you can create your own unique stories to read. Or access a machine learning tool that crafts a unique animated story using Disney characters based on your prompts.
I mention this because many authors and artists earn a living writing tie-in novels or creating other works for large franchises. If major corporations go this route it would potentially hurt the livelihoods of a great many people.
It may be harder than ever for new writers and artists to break in
As John Scalzi said during that AI panel at ConFusion, one of his biggest fears about machine learning is that new writers and artists will be hurt the most. A writer or artist with name recognition can likely weather any changes unleashed by this changing landscape, but new writers and artists can’t do the same.
Or maybe new writers and artists will thrive
Of course it’s also possible new writers and artists will embrace machine learning in similar ways to how each upcoming generation successfully embraces new technologies. I can easily see new writers and artists thriving by using machine learning in ways more established writers and artists are unable or unwilling to do.
I have no clue which of these possible outcomes for new writers and artists is more likely, but I hope it’s the second path.
The bias in machine learning will become increasingly obvious
One topic that hasn’t gotten enough attention in all this is the amount of bias and racism embedded within machine learning tools. For example, DALL-E 2 and Stable Diffusion “struggle to create older couples of color, until you add the word poor.” And as Melissa Heikkilä at Technology Review wrote, when she tried to create avatars of herself using the portrait-making program Lensa, she was continually shown nude avatars or ones with skimpy clothes and sexualized poses, likely because of her Asian heritage. Her white colleagues didn’t experience this to the same degree.
Such bias is unacceptable. Never forget these tools were created by people and trained on massive amounts of human writing and art. That means not only do the best aspects of the human experience exist within these tools, but also the worst.
Promoting your work on social media may become harder
If Twitter discourse currently turns you off or seems to be hitting ever-new lows, imagine how machine learning tools trained on Twitter will turn out. AI has the potential to regurgitate the worst aspects of human interaction across all social media platforms, and I fear it will do exactly that. In addition, chatbots are already proving to be very good at creating disinformation, a trait that, when spread to social media, will make people trust those platforms even less.
Unfortunately, all of that may also make it harder for authors and artists to break through on increasingly cluttered and distrusted social media platforms.
Freelance opportunities will likely change
Many writers and artists support their personal creativity with freelance work in the business, technology, medical and other fields. However, it’s possible these freelance opportunities will quickly change as a result of machine learning.
Most corporations and businesses care more for saving money than for receiving the best work. This means a passable technical manual or promotional image created by AI may be seen as more cost-effective than hiring a freelancer to do the same work. We’re already seeing this in my employment field of journalism, where increasing numbers of press releases are being created by machine learning. More and more businesses are using these releases because they’re cheap and easy to create, even if they’re not as good as ones written by a human.
Instead of creating original works, perhaps many freelancers will transition to using AI to create larger numbers of works for their clients. Or perhaps some freelancers will transition to editing and cleaning up work created by AIs managed by corporations and businesses.
Even more demonization of creatives
Just as there are countless fanboys who cheer on Elon Musk and Tesla despite well-documented problems with the company, there are many people cheering machine learning and hoping it destroys writers and artists. At the ConFusion convention in Detroit, painter and photographer Rick Lieder said he’d seen posts online of people hoping for this very outcome.
Unfortunately, I fear this trend will accelerate with machine learning tools. Part of this is simply the divisive nature of today’s world. Part of it is a carry-over from the current culture wars. And partly, I suspect, it’s because many writers and artists are outsiders to the powers that be and an easy target for people to take potshots at.
A recent example of this was shared by Molly Crabapple, who pointed out that “Emad Mostaque, the hedge fund boss who just got $101 million VC money for his AI company built on stolen artwork, says that artists who oppose him are like aristocrats who try to keep poor people from learning to read.”
And as Jeff VanderMeer recently said in response to a discussion around machine learning, “Apparently people hate creatives and continue to think what we do is replaceable, doesn't require lots of practice, and has no value.”
While some of my predictions are optimistic, others are not. But here’s the most important part: We all have the ability to decide how this turns out!
It is not a given that AI will replace or destroy the livelihoods of writers, artists and other creatives. Society has the ability to decide the outcome of all this. All of us have the ability to push for cultural norms, laws and rules regarding how these tools are used.
None of the possible changes caused by machine learning tools will happen in a vacuum. Instead, how these tools are used will depend on the larger system within which they exist.
Artists and writers have the ability to push back on the worst of these possible outcomes. We are currently seeing this in the science fiction and fantasy genre where writers, artists and readers are pushing publishers to not use AI art on book covers. And the numerous lawsuits mentioned earlier are just the tip of the iceberg regarding the legal pushback and complications that will emerge around AI in the coming years.
I believe the #DisneyMustPay movement is a good model for how artists and writers don’t simply have to accept the worst outcomes that may result from machine learning tools.
Those pushing back against AI will likely be called Luddites and worse. But while that term has come to mean someone opposed to automation or new technologies, it’s important to remember what Luddites were originally fighting for. As Molly Crabapple recently said, “1-The Luddites were a movement by skilled textile workers that smashed machines as a protest tactic to get better labor conditions from (exploitative) factories. 2-The Luddites failed because the bosses had them killed.”
Machine learning tools present many potential changes and issues for not only writers, artists and other creatives but also for society as a whole. Because of that, we all deserve a voice in how these tools are used.
After all, it’s not machine learning tools that threaten writers and artists – it’s how the larger system around us will use these tools that’s the threat.
I choose to be optimistic about all this. As Maurice Broaddus recently told me, “The market has always been horrible for writers and artists. But the way it's been terrible changes over time. What we do as writers and artists is we adapt. We change. We thrive.”
Because of machine learning, the years to come will see many potential changes in what it means to be an artist or a writer. But I intend to find a way to not only thrive as a writer, but also find ways to help our entire community of artists and writers to do the same.