Dispatch from the Trenches of the Butlerian Jihad
Trying to teach in the age of the AI homework machine.
Last summer I made the case for bringing the principle of Dune’s Butlerian Jihad — “Thou shalt not make a machine in the likeness of a human mind” — to our broader discourse on AI. It seemed like a good way to bind together the various felt and thought objections to AI into a common credo. And a good way to distinguish between benign forms of so-called “AI” (spotting tumors, for instance) and the sycophantic imitations of humanity being peddled by the various broligarchs.
Since then, this “hard no” movement against AI has started to take shape. For one, the t-shirt game keeps getting better. Traps are being set on the internet to punish AI scrapers and poison datasets. The new Chicago pope bashed AI in his first big speech. Just in my literary corner of the world, anti-AI clauses are becoming standard in book contracts and magazine submission forms. A recent episode of Apple TV+’s The Studio ended with a crowd at Comic-Con — and Ice Cube — chanting “fuck AI.” Last week there was a WorldCon kerfuffle (sigh) over using ChatGPT in part of the panel selection process.
(My WorldCon take is that, well intentioned though it was, feeding an AI a list of names and asking it to compile dossiers of their scandals and transgressions is a pretty dystopian use-case.)
It’s clear that writers, artists, and others in that orbit increasingly view any amount of engagement with LLMs as a betrayal of creative class solidarity. The sentiment (which I’ve heard all the way from Tumblr teens to Pulitzer Prize winners) seems to be something like this:
I’ve heard pushback about such anti-AI puritanism, arguments that it’s just another case of social media mob culture. For me, this is where the Butlerian Jihad continues to be a fruitful metaphor. The Dune books are all about how holy wars and revolutions are not gentle or reasonable, how they can turn ugly, righteousness fueling a fire that can consume nations and worlds.
The other way the metaphor is proving apt is the deep-seated, almost spiritual nature of anti-AI sentiment. It’s not just more Luddism. Many people — though hardly all, given the popularity of AI products — sense that there is something grotesque about these simulacra, the people who push them on us, this whole affair. That aversion to the technological profane holds even when various stated objections to AI are supposedly addressed or nitpicked to death.1
Meanwhile, throughout all this, I have myself felt on the front lines of something like a grand struggle against these likeness machines — not just as a creative but as a teacher. Because what’s become clear over the past year is that the killer app, the median American use case for products like ChatGPT, is cheating on your homework.
There’s been a lot written about this lately — a big article dropped in New York Mag as I was sitting down to type this newsletter, and a discord mutual had a similarly thorough piece a couple weeks ago in the Chronicle of Higher Education. Both pieces get into the increasing AI frustration among teachers and the increasing AI dependency among students.
There was a lot of hope for the value of AI in education (and still is, if university partnerships with tech companies are any measure). An infinitely patient digital tutor that can tackle any question (à la the Primer in Neal Stephenson’s The Diamond Age, and probably a hundred other SF references) sounds like just what a strained education system needs — if it didn’t hallucinate constantly, that is. And I know teachers who use it. They’ll have students check with ChatGPT in class to get answers to discussion questions, or encourage its use in revision. Some are no doubt having AI write emails to students and paper feedback, too.
But these articles show that concern is mounting over a few factors. First, there’s a big difference between getting something explained to you, and actual learning. You might feel like you are learning when querying a chatbot, but those intellectual gains are often illusory.
Second, AI severs the connection between an output, like an essay, and the real learning, thinking, and practice creating that output usually requires. There’s now no way to be sure that a student who turns in a good essay actually has a grasp on the material that assignment was supposed to push them toward understanding. Thus, AI lets students skip the “desirable difficulties” that produce real learning. The temptation to skip these difficulties is powerful enough that even very engaged students, students who understand the value of “desirable difficulty,” will use AI for the sake of their GPA, their time, and their stress levels.
This corner cutting doesn’t seem to be confined to core classes students have to slog through on their way to their major. At AWP this spring, I attended a panel on fending off AI in the creative writing classroom. Even students who should be on the side of Batman (above) may turn to AI when they’ve fallen behind and have a workshop story due. Sad, because some of our best thinking and writing and storytelling often happens when racing to make a deadline! The takeaway from the panel was: less focus on the product and more on process.
From my own anecdotal experience teaching English over the last two years, particularly first-year composition classes, I can confirm the inroads genAI has made with American college students. I’ve seen it happen in real time. My first semester I caught one, very tech-minded student using ChatGPT for an assignment. My second semester I caught a couple more. Last fall, I sent back rhetorical analysis papers from a full quarter of my class for obvious (and erroneous) AI usage.
By that point, it wasn’t just the comp-sci or business majors or the generally disengaged. That quarter last fall included one of my most engaged students, who had ChatGPT analyze, of all things, one of Ted Chiang’s New Yorker essays on AI. Her mistake was forgetting to include the byline when she copy-pasted into ChatGPT, and so the bot helpfully filled in the author as Jonathan Franzen. Most of the time when I catch students using the homework machine, it’s because of “user error” like this. I’ve had students use AI to write event reports, and then turn the reports in before the event actually took place. I’ve had students submit end-of-course reflections in which they talk about projects we didn’t do or gush about how I’d become “not just a teacher but a mentor” when I’d never once seen them at office hours.
Without such user error, it’s getting hard to point to AI prose with any kind of probable cause. Sometimes I spot two assignments using the same not-quite-right phrase or characterization, or the quotes or citations are sus. Otherwise, often I sense that something isn’t quite right, but it isn’t enough to call the student out on. And I’m sure there are cases where I don’t pick up on the AI usage, either because students engaged with the chatbot in a more upstream fashion, or because they used various prompting tricks to make their text seem more authentic (inserting typos, etc.).
Students are also increasingly aware of this tension. Last fall when I emailed students with suspect papers asking if they used AI (and promising to let them resubmit), they pretty much all fessed up. This past semester I tried the same thing, but those I emailed mostly held firm and denied cheating, knowing, I think, how much of a hassle it would be for me to actually escalate the situation to the level of an academic integrity violation. And it was, so I didn’t.
So a lot of AI work gets past my bullshit filter. The result is that grading and giving feedback — a chore for teachers since time immemorial — now feels more adversarial and less collaborative. Which I hate; we should all try to banish cop-mindset from our psyches and pedagogies. It’s not that I’m eager to catch my students cheating, but I earnestly think I’m doing them a disservice when I let them let AI do their writing and thinking for them, as though — to borrow a popular metaphor — they were using a forklift at the gym.
There’s a big difference between having ChatGPT compose your emails because you don’t want to do it yourself and having AI compose your emails because you can’t do it yourself.
Folks like Sam Altman have compared ChatGPT to a “calculator for words,” and honestly I don’t think that’s far off (except of course calculators do not make shit up). But the existence of calculators does not mean we want to live in a society where people don’t learn to do basic arithmetic. The same principle should apply here. I want my students to write unassisted because I don’t want to live in a society where people can’t compose a coherent sentence without a bot in the mix.
Plus, engaging earnestly with bot-written text is mentally deadening, and frankly I do resent when I have to read it. There’s just no there there, especially if what you’re looking for is a human you can have a conversation with. It reminds me of Neal Stephenson’s novel Anathem, in which misbehaving monks are forced to study a collection of subtly incoherent texts as a form of punishment. Sifting through a bunch of potentially bot-written likeness essays comes with a certain paranoia lurking over my shoulder. Which feels poisonous for the whole process of teaching and learning.
This past semester I tried to make it harder to use AI in my classes, and hopefully, thereby, reduce the poison. Students were asked to compose their work in Google Docs, so I could see they weren’t copy-pasting big chunks of text in. This turned out to be more trouble than it was worth, as, no matter how much I walked them through it in class, I always had to chase some students down to get access to their docs, or untangle weird Canvas integrations, etc. And I’m certain some students were prompting ChatGPT in one window and then hand-typing their essay in the other.
When I first started teaching comp, we were given three options for language to include about AI on our syllabus.
Cited Use: Students were free to query an AI tool and include that language in their assignments, so long as they cited it as one would another source.
Guided Use: Students could use AI as directed by me in the classroom.
Unauthorized Use: Students were asked not to use AI at all (even though, the language acknowledged, these tools could “help them complete assignments more efficiently”).
For the first year, I went with option #1, figuring it would help me avoid exactly the kind of paranoia described above, and that I could help students learn to avoid the pitfalls that were common in AI writing circa 2023. Exactly zero students cited AI use in their papers. Even when there are licit ways to disclose AI input on their assignments, students prefer to try to pass bot-writing off as their own. Which to my mind means that they believe AI is cheating and turn to these tools specifically to cheat.
All the while my students have been eager to write about and discuss AI, with very little prompting from me. I wrapped up this past semester with a “Writing to Future” project where students tried out futures thinking techniques and produced foresight artifacts contrasting predicted vs. preferred futures. Several of them came up with projects fretting about futures with ubiquitous AI and yearning for futures in which tech use is more moderated than today.
I’ve heard these frustrations over and over again from my students. AI is just a new layer on top of the addictive tech stack of phones and screens and social media and Zoom and online educational platforms they’ve spent their whole lives in. Many of them deeply resent that they never had a choice about all this. They get to college and find that their problems with this stuff don’t go away when they are out on their own; in fact the addictive patterns often get worse without family structure keeping them in check.
These conversations — the pleas from young people caught up by these products and unable to get out — are part of what’s pushed me toward the Butlerian Jihad line of thinking. I think there is a good case to be made for trying to restrict AI use among young people the way we try to restrict smoking, alcohol, gambling, and sex. Those policies are imperfect, but they do steer young people away from behaviors that harm them disproportionately compared to adults and that they don’t yet have the capacity to regulate the way (some) adults can.
There are developmental reasons for such restrictions, and pedagogical ones. But also, it seems like our tech overlords aren’t able to create an LLM “personality” that won’t generate CSAM or engage minors in sexual role play (often using celebrity voices). Which highlights the problem with presenting these technologies not as simply a calculator for words but as a “likeness of the human mind.”
Not that adults are necessarily great at managing the negative cognitive impacts of these technologies. This harrowing article from Rolling Stone about “ChatGPT-induced psychosis” points to a growing mental health crisis as users talking to chatbots fall into existential confusion. Which is exactly what I predicted would happen in my years-old story “The Chaperone”:
Very rarely she’d have customers who owned up to and defended their feelings. “Who are you to say what can feel and what can’t? Trini has evolved. She’s emerged!”
“Emerged.” There was a cottage industry of books and forums that sold these lonely men vocabulary like that. They had a whole mythology. The worst charlatans pitched Jan’s customers the notion that sufficiently complex relationships — the power of love! — would make weak AI phase shift to strong. Jan felt sorry for the men who needed such prophecies. Imagine the aching ego it took to believe your chatbot crush could kick off the singularity.
Meanwhile, cheating with AI is not confined to homework. It’s happening in business and law and science. This is not just using AI to help with dull writing tasks. It’s engaging with reality based on nonexistent citations and case law. It’s choosing convenience over fidelity to the truth — perhaps the slipperiest slope of all.
All this points to a need for a new framework for thinking about and addressing the negative cognitive impacts of these products. I haven’t been able to stop thinking about this comparison I saw on discord:
I also think we might be in a place in 20-30 years where AI is like the laudanum/heroin of the late 19th century; everybody loved it, was instantly addicted, and it was so bad we had to invent new kinds of crime and regulation
For my part, I’m going to try something new in my classroom next fall: pen and paper. I’m going to ask students to keep their devices put away and work their ideas onto the page by hand. Students will turn in hand-written freewrites, take notes on paper, mark up printed-out readings, and receive line notes in colored ink — all that old-school methodology that did successfully educate a number of generations before personal computers came along. I’ll have to learn how to read student handwriting (and improve my own!), but I think it’ll be worth it. Any suggestions you have on how to pull this off are most welcome.
This isn’t just about AI, but the way students are distracted by their screens in general. I know how hard they are to resist — as a grad student I’ve been as guilty as anyone of surfing and emailing and texting during class. This past semester was particularly bad on that front. So many students were working on other homework in class, or watching sports or TikTok, that the broad discussions I try to cultivate often struggled to get off the ground. (For what it’s worth, I received an award for teaching excellence this semester, so I don’t think it was just me failing to engage them effectively.)
I also want to give grades more for completion and participation than for the quality of outputs. We’ll try to get more into the process, and worry less about the product. Banish the cop from my mind and teach as best I can.
It’s odd, because I was always a student who hated writing by hand. With the exception of a few periods living off the grid, I’ve always been happy to do my creative and professional work on laptops. Cut and paste is an essential tool in my writing process. But I’m excited to push myself to try out the analog methods for a while. And I’m hoping that in doing so I can cultivate a classroom that gives my students a respite from the dark patterns they are bombarded with.
AI boosters love to say that AI will change everything, and I think in education they may be right — just not in the way I suspect they hope. Beating the likeness bots and the cheating machines will require us to become more present with each other, more humble and careful in our words and choices, and, most of all, more human. But, as with all our great 21st century challenges, I’m hopeful that on the other side of that struggle, we may find a better world.
Cosmic Mystery Club: Memories + Mirror Mazes
Over at my partner’s newsletter, the Cosmic Mystery Club, CYB is discussing Lorelei and the Laser Eyes, a puzzle video game we played together over Christmas. Not gonna lie, even having played it I don’t think I actually understood the plot until reading C’s breakdown. Anyway, go check it out, and subscribe to the Cosmic Mystery Club.
News, Reviews + Miscellany
As mentioned above, I was given a Teaching Excellence Award from ASU’s Graduate Student Government.
I also took first place in graduate fiction at the 63rd Glendon and Kathryn Swarthout Awards with my story “Any Percent.”
And I found out just yesterday that later this summer I will spend a couple weeks in DC and Louisiana as part of the Carbon Removal Justice Fellowship Program put together by the National Wildlife Federation and the Institute for Responsible Carbon Removal at American University.
I think I linked to this previously, before it was fully cooked, but here’s an interview I did last fall for the blog of ASU’s literary journal Hayden’s Ferry Review.
Art Tour: Turbulent Mountain Waterfall
During a recent visit to the excellent Phoenix Art Museum, along with the Remington piece at the top, I enjoyed seeing this beautiful drip painting by Pat Steir. An image I’m going to hold in my mind as the Arizona heat begins to take hold.
One point I’ve seen in a few places is that objecting to training these models on vast amounts of human creative and intellectual work without authorial permission somehow puts one in the same camp as Disney and other copyright monopolists. And sure, AI training could be construed as fair use (though maybe not!), and copyright was always a not-great legal tool, wielded just as often to crush creativity as to protect it. But that doesn’t mean that the looting and strip mining of our collective words and images and art by wealthy companies isn’t wrong. It’s just wrong in a way the law hasn’t figured out how to define yet.