AI and the Enshittification of Life, or My Year Wading Through the Slop of Generative Artificial Intelligence
Note: This report is a follow-up to my collected Genre Grapevine coverage on this topic from 2023, which was republished as the free e-book Creativity in the Age of Machine Learning.
###
For this year's total solar eclipse, my oldest son and I took a road trip with K. Tempest Bradford to the Skyline Drive-In Movie Theater in Shelbyville, Indiana. However, that wasn't our intended destination because we didn't have one. We deliberately half-assed the road trip and didn't pick a final destination in advance, instead proclaiming "Here's the zone of totality! We'll just drive to this general area and find a great spot!"
In short, we made choices about our road trip in real time.
And we did find a great spot. We passed this drive-in theater welcoming visitors in the middle of the countryside and decided what the hell. Pure synchronicity. We parked and waited for the eclipse. We talked about life and science fiction and people watched. Then the eclipse blew us away and we drove home, having had a great experience and a great road trip.
Our trip was organic. Natural. Human. In short, the trip was the exact opposite of what we'd have experienced if we'd asked ChatGPT or any other generative AI program to make the choices for us.
If 2023 was the year the companies behind generative AI conned the world into believing "artificial intelligence" had learned to be creative – spoiler: there's no intelligence in generative AI, and the creativity behind so-called machine learning is merely algorithms trained on the stolen work of writers and artists – then 2024 is the year when the companies and people behind generative AI showed the world how quickly these programs could engulf the internet with near-total enshittification. Examples of said enshittification ranged from Google's search engine telling people to put glue on pizza to large numbers of AI-generated images being used as propaganda in the recent US presidential election.
Unfortunately, this enshittification is poised to expand to all aspects of our lives in the near future.
One of the best analyses I've read this year about what AI-generated content is doing to our world is "The Internet's AI Slop Problem Is Only Going to Get Worse" by Max Read in New York Magazine.
This must-read report begins with a great line: "Slop started seeping into Neil Clarke's life in late 2022." Read then describes how Clarkesworld Magazine was overwhelmed last year with submissions created by large language models (LLMs), a type of generative AI program that outputs human-like text. These submissions were the result of videos by people "teaching side hustles and online jobs." Basically, grifters created videos showing people how they could make money using an LLM to "write" a science fiction story and then submit it to magazines like Clarkesworld.
Since then, conmen and hustlers have further weaponized generative AI, flooding all aspects of the internet with what Read calls "slop." As Read writes:
"These are prime examples of what is now known as slop: a term of art, akin to spam, for low-rent, scammy garbage generated by artificial intelligence and increasingly prevalent across the internet – and beyond. … In the nearly two years since, a rising tide of slop has begun to swamp most of what we think of as the internet, overrunning the biggest platforms with cheap fakes and drivel, seeming to crowd out human creativity and intentionality with weird AI crap. … (And) the slop tide threatens some of the key functions of the web, clogging search results with nonsense, overwhelming small institutions like Clarkesworld, and generally polluting the already fragile information ecosystem of the internet."
Again, read the entire article. There are tons of fascinating data points in Read's report, such as how this AI slop is hurting companies' ability to train their large language models. Unfortunately, as Read notes, if an "LLM's training corpus contains at least 10 percent non-synthetic – that is, human – output, it can continue producing slop forever." Read's article also notes other interesting facts, such as how an estimated one-tenth of academic papers now appear to include generative AI content.
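That 10 percent figure points to what researchers call "model collapse": when a generative AI trains mostly on its own output, generation after generation, its output degrades toward repetitive mush, and a steady supply of human-made data is what keeps the slop machine running. Here's a toy simulation of the dynamic – my own illustrative sketch, using a simple statistical "model" rather than an actual LLM:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 50  # data points the "model" trains on each generation

def final_spread(human_fraction, generations=1000):
    """Repeatedly refit a toy model (a mean and a spread) to data that is
    partly its own synthetic output and partly fresh human-made data."""
    mu, sigma = 0.0, 1.0  # starts as a perfect fit of the human data
    for _ in range(generations):
        n_synth = int(N * (1 - human_fraction))
        synth = rng.normal(mu, sigma, n_synth)     # the model's own output
        human = rng.normal(0.0, 1.0, N - n_synth)  # fresh human-made data
        data = np.concatenate([synth, human])
        mu, sigma = data.mean(), data.std()        # "retrain" on the mix
    return sigma

print(f"0% human data:  output variety collapses to {final_spread(0.0):.2e}")
print(f"10% human data: output variety stays near   {final_spread(0.1):.2f}")
```

Fed purely on itself, the toy model's variety withers to nearly nothing within a few hundred generations; with even a modest fraction of human-made data mixed in, it can keep churning out passable output indefinitely. Our creativity is the fuel the slop machine burns.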
But the most important point to me in Read's report is that all of this comes down to choices. Everything that's currently happening – from the AI slop flooding the internet to the companies using art and writing without permission to train generative AI programs to the conmen and scammers using these programs to transform our every online moment into prime grifting opportunities – all of these are deliberate choices people have made.
And these choices are leading us down a pretty damn obvious path to an ever-expanding enshittification of all our lives.
The Choices We Make, the Paths We Take
At the top of the list of people I believe have genuine insights into generative artificial intelligence is Ted Chiang. A few months ago, Chiang published an excellent new essay on the topic in The New Yorker titled "Why A.I. Isn't Going to Make Art."
As Chiang explains,
"Art is notoriously hard to define, and so are the differences between good art and bad art. But let me offer a generalization: art is something that results from making a lot of choices. This might be easiest to explain if we use fiction writing as an example. When you are writing fiction, you are – consciously or unconsciously – making a choice about almost every word you type; to oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. When you give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices."
Chiang wraps up his thought experiment by pointing out that when a generative AI "creates" a story based on your prompt, it makes the choices you are avoiding. Generative AI does this in various ways, such as by taking "an average of the choices that other writers have made, as represented by text found on the Internet; that average is equivalent to the least interesting choices possible, which is why A.I.-generated text is often really bland. Another is to instruct the program to engage in style mimicry, emulating the choices made by a specific writer, which produces a highly derivative story. In neither case is it creating interesting art."
As Chiang points out, "The selling point of generative A.I. is that these programs generate vastly more than you put into them, and that is precisely what prevents them from being effective tools for artists. The companies promoting generative-A.I. programs claim that they will unleash creativity. In essence, they are saying that art can be all inspiration and no perspiration – but these things cannot be easily separated."
I agree with Chiang's view that true creativity involves choices. And if you aim to create something worth sharing with the world, you're likely going to have to make a ton of hard choices.
A recent skeet by award-winning author John Wiswell illuminates just one example of the choices that go into true acts of creation:
"The phrase 'AI slop' is a great example of why AI sucks. LLMs don't turn phrases or contemplate the emotional effects of words. An LLM couldn't have given us a term for why its work is so appalling that equals 'AI slop,' something people just riffed their way into."
Wiswell is correct: no LLM could come up with a phrase such as AI slop. Or a new word like enshittification, which was coined by Cory Doctorow and is a perfect example of a powerful word that's also expansive, allowing it to encompass so much more than Doctorow originally intended.
And knowing which words to use to express certain emotions, moods, situations, beliefs and so much more is merely one of the countless choices a writer must make.
The power of art and literature comes from the choices made during the process of creation. One reason art and literature resonate with people around the world is because this process – the choices taken, the connections built, the paths followed – mirrors how our own lives are lived. Our lives result from countless choices made not only by ourselves but by our families, friends, acquaintances, and many people we've never met. The choices made by the elite, rich and powerful affect our lives, as do choices by leaders and builders in the communities around us. Even the choices made by our ancestors and every person who lived centuries and millennia before us still echo through our lives.
All these choices – including choices we have control over and those we don't – help create our paths in life.
The choices artists and writers make with their creations resonate with people because on both a conscious and unconscious level we see our own lives reflected in the world's art and literature. And the paths artists and writers create are then experienced by people around the world, building connections between all of us as people read fiction, watch movies, view paintings, swipe through photographs on their phones, and experience art and literature in numerous other ways.
Generative AI programs don't make the true choices that result in any of this. As Chiang said, generative AI provides an average of choices, as determined by algorithms. Or it offers basic mimicry of works already created by humans.
The result of such averages and mimicry is mediocrity. Unfortunately, for many people in today's world such mediocrity is acceptable as long as they receive merely adequate results quickly and cheaply. Hell, "quickly and cheaply giving people the merely adequate" might as well be the slogan of every generative AI program out there.
If you want to create something truly worthwhile, delegating your creativity to a "merely adequate" tool because it's quick and cheap isn't simply making a bad choice. It's choosing to largely remove yourself from the entire creative process.
A Year of Pushing Back Against Generative AI
When I first started covering generative AI, a very loud cohort of tech bros and online fans said these programs would level the creative playing field and allow anyone to create art and stories. Artists and authors were derided by these people as some all-powerful elite who needed to be taken down and have their livelihoods disrupted. One venture capitalist even attacked artists who complained about generative AI being trained on their art without permission or compensation, essentially comparing these artists to "aristocrats who try to keep poor people from learning to read."
Two years later, it's obvious little of that hype came true.
Part of this is because the limitations of these tools are being increasingly revealed as the technology hits a "brick wall." In an example of these limitations, The Wall Street Journal reported this month that "OpenAI's new artificial-intelligence project is behind schedule and running up huge bills. It isn't clear when – or if – it'll work. There may not be enough data in the world to make it smart enough."
Essentially, the current generation of generative AI vacuumed up the low-hanging online fruit to make their algorithmic programs function. Taking a similar approach with the next generation of these programs doesn't appear to be working. Because of this, it's possible we won't see a major evolution of the current generative AI programs for a long time. Instead, these programs may simply be tweaked and incrementally improved, much as graphical operating systems haven't changed radically since the release of Microsoft Windows 3.0 nearly 35 years ago. Someone who used Windows 3.0 back in 1990 would still generally understand how to use a current operating system, even though performance has greatly improved and new features like touchscreens have been added.
In addition, 2024 showed that people are increasingly willing not only to push back against having their works used to train these programs, but also to set the terms for how the products of generative AI are presented and accepted by the world at large.
Readers have been quick to call out books that try to pass off AI-generated art as human-created, as happened in February to Tor Books over the cover art for Gothikana by RuNyx, which incorporated "AI-generated assets in its design." In June, DC pulled variant comic book covers by Francesco Mattina after complaints that part of the artwork appeared to use generative AI. And in August, the trailer for Megalopolis came under fire after people determined it contained fake film critic quotes generated by AI.
This pushback is helping set a cultural standard among artists, writers, readers and the general public that it is unacceptable to use generative AI content in works claimed to be created by humans. And even when generative AI content slips in without the knowledge of an artist or author, the emerging standard is still reinforced. We saw this happen with John Scalzi in June after the cover of the Italian edition of his novel Starter Villain turned out to have been created using AI-generated stock art, despite Scalzi having a policy stating this absolutely wasn't allowed. In response, Scalzi released a new statement on not using AI art to illustrate his books, noting this is a "hard contractual point" for him.
Even as we see the rise of new cultural standards against the use of generative AI content, technologies that reveal when this content is used are also being created and tested. As the journal Nature described in a recent editorial, "Scientists are closing in on a tool that can reliably identify AI-generated text without affecting the user's experience." A paper in that issue of the journal described one such watermarking program, which Google DeepMind has rolled out as SynthID. This program allows text created by large language models to be readily identified. Google DeepMind has also released the tool as open source, meaning it may be rapidly integrated and used by developers around the world.
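For a sense of how text watermarking works in principle, here's a deliberately simplified "green list" scheme in the spirit of published academic proposals – not SynthID's actual algorithm, which is more sophisticated, and with function names that are my own invention. The generating model secretly nudges its word choices toward a keyed pseudorandom subset of the vocabulary; a detector holding the same key can then test whether a text favors that subset more than chance allows:

```python
import hashlib
import math

SECRET_KEY = "hypothetical-shared-key"  # assumed shared by generator and detector

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all possible word choices to a
    'green list' that depends on the preceding word and the secret key."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    """Unwatermarked text should land on the green list about 50% of the time;
    text from a generator that favored green words scores far higher."""
    words = text.lower().split()
    n = max(len(words) - 1, 1)
    greens = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return (greens / n - 0.5) * math.sqrt(n) / 0.5

# A z-score above ~4 on a reasonably long text would be strong statistical
# evidence that the generator was steering toward the green list.
```

The appeal of this kind of scheme is that the bias is invisible to readers but, over enough words, unmistakable to anyone holding the key.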
This ability to identify written generative AI output follows several tools released in 2023, such as Nightshade and Glaze, which allow artists to "poison" their works with false image information and thereby corrupt the training data of any generative AI trained on their art.
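The underlying technique is a kind of adversarial perturbation. Heavily simplified – Glaze and Nightshade's actual methods are far more sophisticated, and everything here (function name, parameters) is my own illustrative sketch – the idea is to compute a small, nearly invisible change to an image that drags its machine-readable features toward a decoy, so a model training on the image learns the wrong association:

```python
import torch

def cloak(image, extractor, decoy_features, steps=200, budget=0.03, lr=0.01):
    """image: float tensor in [0, 1]; extractor: any differentiable image
    encoder (an assumption of this sketch); decoy_features: the embedding
    of a decoy style or subject to impersonate."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Push the cloaked image's features toward the decoy's features.
        cloaked = (image + delta).clamp(0, 1)
        loss = torch.nn.functional.mse_loss(extractor(cloaked), decoy_features)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the change imperceptible
    return (image + delta).clamp(0, 1).detach()
```

To a human eye the cloaked image looks untouched; to a model scraping it for training data, it reads as something else entirely.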
And then there are the lawsuits filed by artists, writers, news organizations and others against the companies that trained their generative AIs on copyrighted works without permission. As venture capitalist Josh Harlan complained in a recent Wall Street Journal op-ed, there are fears these lawsuits could derail the economic potential of even the current generation of programs. That threat became concrete in August, when a federal judge allowed a lawsuit by artists against the makers of generative AI art programs to move forward. The judge "found that Stable Diffusion, Stability's AI tool that can create hyperrealistic images in response to a prompt of just a few words, may have been 'built to a significant extent on copyrighted works' and created with the intent to 'facilitate' infringement."
However, even if the companies that created generative AI programs end up having to pay artists, authors and others for illegal use of their copyrighted material – and even if the current generation of this technology does stagnate – the virtual cat is already out of the bag.
In Scalzi's note about the AI art used for Starter Villain, he said it is likely "that what the definition of 'AI-generated' is will change over time," adding that there is "a distinct creative difference between using these programs as tools to foster human creativity, and using these programs to substitute for human creativity."
I agree. The current generation of generative AI programs isn't going anywhere, and it's likely that in a decade or two writers and artists will use these tools much as I'm using a keypad to write these words.
But for that to happen, the original sins in the training of generative AI must be addressed. And we'll also have to find a way to deal with the tsunami of slop these programs have unleashed.
Wading Through 24/7 Slop
A few months after my eclipse road trip I took another one, this time across Ohio. I stopped at a number of places including Old Man's Cave, a beautiful ravine and overhang made famous in Jeff Smith's graphic novel Bone. I'd recently reread the Bone series and wanted to visit this place that had been chosen to exist within such a wonderful work of art and fiction.
Choices, as they say.
Along the way I stopped to eat at a White Castle. One of the choices I make in my life is to eat sliders on road trips. But this time a new reality slapped me upside the head: instead of talking to a real person at the drive-thru, I had to deal with White Castle's new AI.
A large white screen greeted me at the drive-thru, announcing the "Julia artificial intelligence (AI) experience." Underneath those words, I was told that "White Castle and its subcontractors, including without limitation SoundHound, will capture, collect, store, share, and use an audio recording of your voice. The specific processing purposes, and the length of term the voice audio recording is being collected, stored, shared and used is available at the below link and QR code, along with the retention schedule and guidelines for permanently destroying the audio recording."
In order to place my order – i.e., "To agree and proceed" – I was told to say, "Ok, I'm ready." This felt strangely like clicking through one of those crappy end-user license agreements before you're allowed to use some software or app, which isn't what I'd expected to do at a White Castle drive-thru.
White Castle's AI system worked okay, though in some ways it forces you to conform to its expectations of the words you'll use. The AI also works well if you order what most people typically order, such as one of the combos. But when I asked for extra ice in my iced tea, the AI decided I wanted an additional drink. It took some back and forth to get that corrected.
Among fast-food chains, White Castle is known for treating its employees decently. All of its restaurants are company-owned, meaning no franchises (which are where much of the low pay and abuse in the fast-food industry happens). White Castle also appears to pay its employees higher wages and offer better benefits than many other fast-food restaurants. So if a somewhat decent company like White Castle is embracing AI, be certain that every business out there will soon do the same.
An article about White Castle's AI starts by proclaiming that for the brand, the "future is all about connection, flexibility, and a commitment to guest and employee experience." But that's merely marketing bullshit. Businesses are hoping generative AI can help them cut down on employee costs. We've seen this pattern with rich people, businesses and corporations for centuries, so why would it be any different now?
And we're already seeing the tsunami of slop flowing our way. The AI slop I waded through this year included:
A report in the Columbia Journalism Review found that media outlets such as Politico and Wired that partnered with ChatGPT's new search function had "inaccurate representations" of their content presented by the generative AI. Not only was this information frequently wrong, it was "confidently wrong," meaning it not only misled users but also possibly hurt the reputations of these media outlets. After a presidential election in the United States where disinformation and lies were so easily spread, the last thing we need is generative AI further eroding people's trust in media outlets.
Uncanny Magazine reported they are "seeing more and more submissions by writers with publication records who are almost certainly using AI to create initial drafts and then are trying and failing to edit things into shape."
The tech bros behind the startup Spines want to disrupt the publishing industry by releasing "up to 8,000 books next year using AI." The company "will charge authors between $1,200 and $5,000 to have their books edited, proofread, formatted, designed and distributed with the help of AI." As Maria Tureaud pointed out about all this, "A vanity press is still a vanity press. Remember...AI can't edit your book the way a human can...nor can it write a book the way you can."
Grifters publishing AI-generated books and images "are threatening the nearly 500-year-old art of lace making" by providing wrong information on creating the difficult artform. This is sparking fears that beginners "will be scammed by AI-generated books that contain no real information about the techniques and give up in frustration."
There are companies attempting to create virtual resurrections of the dead, meaning you don't need to grieve when your mother passes away because a generative AI trained on her life will allow you to talk and interact with her forever.
An Australian doctor involved in the right-to-die movement wants to use machine learning to confirm a person's consent to end their own life, saying "We really want to develop that part of the process so that a person can have their mental capacity assessed by the software, rather than ... spending half an hour with a psychiatrist."
And in a sign that generative AI may even drive our inner lives toward mediocrity, a recent ad from Apple Intelligence urged people to "change your tone." In the ad, an employee writes an angry, scathing email before one click of Apple Intelligence's "friendly" button "transforms his fuming diatribe into a wordy, yet polite, four-paragraph missive."
And all that preceding AI slop doesn't even touch upon deepfake images of politicians and celebrities, deepfake revenge porn, scammers using generative AI to impersonate the voices of loved ones, face-swapping software so you won't know who you're talking to on a video chat, and much, much more.
As Max Read writes in his "AI Slop Problem" report, "When you look through the reams of slop across the internet, AI seems less like a terrifying apocalyptic machine-god, ready to drag us into a new era of tech, and more like the apotheosis of the smartphone age — the perfect internet marketer's tool, precision-built to serve the disposable, lowest-common-denominator demands of the infinite scroll."
In that report, Read also notes that even if you turn off your computer or phone to avoid this slop, it will still leak into your life.
I recently spoke with engineer and science fiction author Wole Talabi, who has reflected on the potential of generative AI. Talabi is disturbed by how these companies have tried to impose the technology, unrestrained, on creative industries where it is neither needed nor welcome.
Referencing the Gartner hype cycle, Talabi thinks we're past the "peak of inflated expectations" with regards to generative AI and are nearing the "trough of disillusionment." But according to Talabi, that doesn't mean the companies doing all this will automatically move on to the golden lands of the "slope of enlightenment" or the "plateau of productivity." To get there, they'll need to find an honest and equitable way to address the "original sin" of data theft for model training; if they do, there may be some genuinely human-centric potential for the technology.
I believe Talabi is correct. In the years to come, we will likely see some applications where generative AIs, and in particular large language models, improve human lives. And writers and artists may eventually use these programs to foster and improve their creativity in ways we can't even comprehend today.
But to get to this potential goodness, we first need to be prepared to swim through oceans of enshittification.