Note: This is my new monthly column providing analysis and coverage related to generative AI. It will run alongside my regular Genre Grapevine, meaning if you subscribe to my Substack you'll have access to both the Genre and GenAI Grapevines.
Will GenAI Change How People Think and Experience the World?
I've written before that artists and writers are the canaries in the coal mine for what the tech companies pushing generative AI systems have planned for the coming years. Essentially, the threat genAI poses to the livelihoods of artists and writers will soon expand to numerous other areas of people's work and lives.
But why did these corporations come after writers and artists first? Essentially, we're the low-hanging fruit – our works were easy for corporations to access and pirate for training their AI systems. As an added bonus from the corporate point of view, most writers and artists are economically weak. Yes, there are artists and writers whose work has made them rich and powerful, but they're the exception, not the rule.
And equally important: while artists and writers may generally be economically weak, what we create is powerful. Stories and art change the way people think and experience the world around them. That ability is something the rich and powerful have long coveted and attempted to use for themselves.
However, art and stories aren't the only way to change how people think and see the world.
In his new 404 Media report "Teachers Are Not OK," Jason Koebler discusses what teachers and university professors are seeing as their students increasingly use generative AI. It turns out many students in higher education are using programs like ChatGPT to complete assignments and assist with their schoolwork. Koebler describes teachers "trying to grade 'hybrid essays half written by students and half written by robots,' trying to teach Spanish to kids who don't know the meaning of the words they're trying to teach them in English, and students who use AI in the middle of conversation."
The end result? As Professor Robert W. Gehl at York University in Toronto says in the report:
"I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That's all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarize readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you."
Gehl is correct about the importance of higher education to individuals and society as a whole. Unfortunately, I believe the educational ideal Gehl describes – of schools and universities where students "read, reflect upon, write about, and discuss ideas" while being trained to be critical citizens – was already dying long before genAI came around.
In recent decades higher education has become extremely corporatized, with many colleges now run much like major corporations such as Microsoft and Apple. Not only has this resulted in ever-increasing tuition fees for students, it has also turned higher learning into an almost assembly-line process. One result is that academic research has faced a growing number of scandals as "furtive commercial entities around the world have industrialized the production, sale and dissemination of bogus scholarly research, undermining the literature that everyone from doctors to engineers rely on to make decisions about human lives."
This corporatization has also changed how people see the role of higher education, with many believing a diploma or attendance at the right college is merely one more box to check on their path to riches and power. And students aren't fools. They see all this. They know how their colleges are run. They know what's rewarded in society.
So of course students are using genAI as a shortcut to success.
In Koebler's report, Nathan Schmidt, a university lecturer and managing editor at Gamers With Glasses, perhaps cuts to the core of the issue:
"ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo. It's a symptom of the world of TikTok and Instagram and perfecting your algorithm, in which some people are professionally deemed the 'content creators,' casting everyone else into the creatively bereft role of the content 'consumer.' And if that paradigm wins, as it certainly appears to be doing, pretty much everything that has been meaningful about human culture will be undone, in relatively short order."
Schmidt is correct that the "passive consumption and regurgitation of content" does threaten cultures around the world. How this plays out in higher education is a warning sign of what society and culture at large may soon be experiencing.
Education has long been one of the major ways society influences how people think and see the world around them. Despite my critique of the corporatization of higher education in recent decades, I believe many colleges and universities still aim to achieve this goal.
But when you add ChatGPT to the ongoing corporatization of higher education, then all bets are off.
Pushing Back When Lies Become Our Shared Language
Earlier this year, author and activist Rebecca Solnit wrote a powerful essay titled "To Use Their Language Is to Endorse Their Lies." While the essay was specifically about the language surrounding the Department of Government Efficiency (DOGE) and other actions by President Trump, what she said also applies to how generative AI has been promoted.
As Solnit stated:
"Every crisis is in part a storytelling crisis, and the current one here in the US is also a language crisis. How we use the language and how we listen for lies that are in single words and phrases as well as in sentences or narratives is part of what we can and must do to resist the Trump Administration's authoritarian agenda. A single word can be a lie. For example, journalists are still calling what Musk and his child army have been doing an operation in pursuit of government efficiency. It is true that in a wink-wink jokey way DOGE is called the Department of Government Efficiency. It's also true that it's not a department, they demonstrably don't give a damn about efficiency, nor are they competent to produce it or pursuing it. It's about efficiency in the same way that the Ministry of Truth in Nineteen Eighty-Four was about truth."
As I wrote two years ago, a similar type of corporate doublespeak is used with regard to generative AI. There's no actual "intelligence" behind the generative artificial intelligence systems that are so popular today; these systems are instead examples of machine learning, with algorithms built from patterns in their training data. The same goes for using a term like "hallucination" when genAI systems make mistakes, as opposed to saying the system produced errors, algorithmic junk, or gibberish.
George Orwell's famous novel Nineteen Eighty-Four has long been the standard for describing manipulative language like this, especially through his description of how Newspeak works. However, Orwell's thoughts on how language can be used to achieve power and control didn't anticipate how technologies such as social media could wield language to manipulate people even more effectively.
Last year, Megan Garber's essay "What Orwell Didn't Anticipate" in The Atlantic explored this issue. In reference to Orwell's famous 1946 essay "Politics and the English Language," Garber writes that the essay "is a writing manual, primarily—a guide to making language that says what it means, and means what it says. It is also an argument. Clear language, Orwell suggests, is a semantic necessity as well as a moral one."
However, Garber then points out that the "essay, today, can read less as a rousing defense of the English language than as a prescient concession of defeat. 'Use clear language' cannot be our guide when clarity itself can be so elusive. Our words have not been honed into oblivion – on the contrary, new ones spring to life with giddy regularity – but they fail, all too often, in the same ways Newspeak does: They limit political possibilities, rather than expand them. They cede to cynicism. They saturate us in uncertainty. The words might mean what they say. They might not. They might describe shared truths; they might manipulate them. Language, the connective tissue of the body politic – that space where the collective 'we' matters so much – is losing its ability to fulfill its most basic duty: to communicate. To correlate. To connect us to the world, and to one another."
Unfortunately, I fear this has already happened with the language used to describe machine learning systems. For example, if you now use a term like large language models (LLMs), the more accurate term for many generative AI systems, many people will either not understand you or dismiss the words you're using as esoteric.
To a large degree, I agree with Rebecca Solnit that to use manipulative language is to endorse the lies of those who first used those words. This is why it's so important to push back against this false language use before it becomes established in the greater culture.
Unfortunately, I fear the terms generative AI and genAI have indeed become established. Trying to write or speak these days about machine learning and LLMs without using the words generative AI is difficult at best. At worst, it marks you with many audiences as Homer Simpson's dad yelling at clouds.
But I disagree with Solnit on one point: it is possible to use manipulative words to push back against the manipulation, especially once a term has become widely known. One way to do this is with sarcasm or humor, the legendary tools for undercutting manipulative language. In many online forums like Reddit I've already seen people doing this by saying variations of "Did you write this with AI?" upon encountering poorly written or hard-to-understand screeds and rants.
Another way to undercut the manipulative language around machine learning is to mix the terms with Orwell's plea for clear language. If you write about genAI, continually remind your readers that there's no actual intelligence in these systems. If someone says ChatGPT hallucinated its results, ask, "Do you mean it made yet another mistake?"
Is James Frey the Poster Boy for GenAI?
Twenty-two years ago, James Frey published his memoir A Million Little Pieces, which he was later accused of having substantially fabricated. Fifteen years ago, Frey set up a questionable "fiction factory" that paid writers $250 per novel along with a percentage of future revenue. Two years ago, Frey said in an interview with Centre Pompidou that he was using artificial intelligence to write a book called FOURSEVENTYSIX, adding, "I use AI because I want to write the best book possible, and I'm prepared to use all the tools at my disposal to make it happen."
Given all this history, it's not surprising that Frey has attracted controversy over whether Next to Heaven, his new novel about rich people behaving very badly, was written with genAI systems. L.C. Whitehouse criticized Book of the Month (BOTM) for selecting the novel, while others such as LisaCantEven and Patrick Monahan piled on, with Maris Kreizman adding, "I would find the whole James Frey Bad Boy Redemption Tour so much less distasteful if he hadn't screwed over tons of writers and then bought a Porsche with the profits."
In an interview with Vanity Fair, Frey denied the accusations of using genAI.
He said, "I don't use generative AI to write ever, just so we're clear. I use AI as a writer the same way I used to use Google. I'm looking at my AI right now. … It helps me immensely because it's vastly faster than a search engine. But I don't use generative AI to actually compose sentences or put together the text. I mean, I guess I do use it to put together the text of the book. But when we were talking earlier about Next to Heaven, and I said, 'I would look up what's the most expensive silverware ever made,' AI just gives you the answer a lot fucking faster."
If you read that quote carefully, you may have noticed Frey went from "I don't use generative AI to write ever" to "I guess I do use it to put together the text of the book." A charitable reading of his statements is that he doesn't use genAI except when he does. A less forgiving reading is that he's trying to Million Little Pieces himself into being the poster boy for writers using genAI.
Also worth noting is what else Frey said in that interview two years ago:
"It was I, the writer, who decided what words were put on to the pages of this book, so despite the contributions of the AI, I still consider every word of this book to be mine. And I don't care if you don't."
Other News and Info
Lincoln Michel shared an interesting take on authors using genAI: "I find the ethical boundaries for authors and AI to be pretty simple honestly. Treat it like a human, for these purposes. You wouldn't have to credit someone you spit-balled ideas with or who proofread your work or who gave you some research advice. (Though you should thank them in the acknowledgements. I think you can skip that for a computer program.) OTOH, you would have to credit someone who wrote all or some of your text if you have any artistic integrity at least. Plenty of writers have found artistic uses for GenAI text in their books and disclosed that up front to readers. Why try to hide the use? Unless you know how you're doing it is artistically bankrupt…"
According to The Verge and The New York Times, "The Washington Post could soon allow non-professional writers to submit opinion columns using an AI writing coach known as Ember." Evidently this genAI system would "automate several functions normally provided by human editors" along with offering writers "developmental questions" and pointing out to them the "fundamental parts of a story, such as an 'early thesis,' 'supporting points,' and a 'memorable ending.'"
Investor Elad Gil helped seed early genAI companies, and now he's eyeing a new opportunity to make money: "using AI to reinvent traditional businesses and scale them through roll-ups. The idea is to identify opportunities to buy mature, people-intensive outfits like law firms and other professional services firms, help them scale through AI, then use the improved margins to acquire other such enterprises and repeat the process." Translation: We're going to use AI to replace lawyers and all those other people who work in high-paid professional services firms.
Lawyers and other professional service employees aren't the only ones at risk from genAI – add fitness instructors to that list.
According to The Wall Street Journal, "A fast-growing technology known as ambient listening is taking over an onerous but necessary task in healthcare: documenting what happens in the doctor-patient encounter." Evidently the genAI system is being used at hospitals "to capture discussions at the bedside, update medical records, draft care plans and create discharge instructions."
On May 9, the U.S. Copyright Office released a "highly anticipated report on copyright and AI usage." As stated in coverage by NPR, the report said that "in some instances, using copyrighted material to train AI models could count as fair use. In other cases, it wouldn't." In what can only be an absolute coincidence and not President Trump sucking up to corporations and tech bros worth billions of dollars, the head of the Copyright Office, Shira Perlmutter, was fired the day after the report's release. Perlmutter is suing to overturn her dismissal. Despite her firing, the report can still be downloaded from the Copyright Office.
Lena McDonald, a fantasy romance author, faced criticism and backlash "after readers discovered an AI-generated prompt accidentally left in the published version of her book." According to The Latin Times, this prompt was an "editing note embedded in chapter three of her book Darkhollow Academy: Year 2, referencing the style of another author."
The BBC, through its online course platform Maestro, used genAI to reanimate Agatha Christie as a writing instructor. As stated on Christie's profile, "In a world-first, the bestselling novelist of all time offers you an unparalleled opportunity to learn the secrets behind her writing, in her own words. Made possible today by Agatha's family, an expert team of academics and cutting-edge audio and visual specialists, as if she were teaching you herself…" According to The Guardian, "the videos 'starring' the author, who died in 1976, have been made using AI-enhanced technology, licensed images and carefully restored audio recordings."
People online had a field day mocking newspapers such as the Chicago Sun-Times and The Philadelphia Inquirer after they published "a syndicated summer book list that includes made-up books by famous authors." Of the 15 books on the list – including works by such celebrated authors as Isabel Allende, Percival Everett, and Andy Weir – only five were real. The rest were pure fantasy spun by genAI. According to NPR, the list was written by Marco Buscaglia of King Features and then distributed to client newspapers. In response to all this, Buscaglia said, "Huge mistake on my part and has nothing to do with the Sun-Times. They trust that the content they purchase is accurate and I betrayed that trust. It's on me 100 percent."
Kelly Jensen, the editor of Book Riot and a former librarian, said the fabricated summer reading list "is the future of book recommendations when libraries are defunded and dismantled. Trained professionals are removed in exchange for this made up, inaccurate garbage. Are you fighting yet?"
The day after the fake reading list was published, Chuck Tingle released a new e-book titled "The Last Algorithm: Pounded By The Fake Book That An AI Claimed I Wrote And Then The Chicago Sun-Times Printed As Fact."