Yesterday, SF/F fan/YouTuber Erin Underwood posted a “GenAI open letter” on File770. By this point in the “discourse,” you’d have to be living on the Moon without an Internet connection to not understand just how much the SF/F/H community despises GenAI as an avenue for creative function. However, you’d have to be a moral coward to understand this fact and not meaningfully engage with any of the major reasons for that response.
Unfortunately, Erin Underwood is a moral coward.
I think it’s important to note that before the “letter” proper, Underwood expresses her reluctance to publish it, stating that she is “so tired of being afraid of our community” because people are quite forward and aggressive in their opposition. This is true, and it’s demonstrated in the reactions to Underwood both in the comments and on social media.
However, Underwood seems both unable and unwilling to engage with the opposition on any of their major points. In all 2,600 words of Underwood’s letter, she barely acknowledges the widespread harm these systems have produced and seems more inclined to dismiss those harms as things that happened in the past and things we must get over. Even when commenters raise their moral objections to the technology, Underwood repeatedly tries to steer the conversation back to her “point” or to her bulleted list of real and theoretical use cases looking for “solutions.” It never occurs to her that her “point” requires everyone to abandon their foundational objections to the technology — objections rooted in harms at the scale of entire communities, industries, and the planet.
That refusal to field these objections, to respond to them either with some reason why they are wrong or oversold or with an acknowledgement that they require restitution of some kind, means there is no moral foundation for Underwood’s call for conversation. All the prickly moral problems — and there are many, as Foz Meadows illuminates with many links in their response to Underwood — are simply sidestepped, either handwaved away with faint acknowledgement or avoided altogether. There’s no meaningful engagement with one of the largest thefts of intellectual property in human history, the environmental impact of these technologies, their economic effects on entire industries, or the intellectual impacts of these tools on kids, adults, information transfer, and so on. All of these problems are practical footnotes in Underwood’s post, and it doesn’t seem to occur to her that this might be the reason she receives so much backlash for her views.
Nor does it occur to Underwood that her tendency to repeat pro-AI pablum might raise the ire of her readers because of its role in paving over the moral objections. AI is here to stay. The damage is done; now we must adapt to it. The genie is out of the bottle. That same reasoning applies to meth and fentanyl, yet I can imagine that Underwood’s objections to those substances would be moral ones. Where is that same moral concern with GenAI? Avoided.
This is the problem with almost all pro-AI positions I have encountered. Even the most “rational” of them do not engage with the problems these tools have created and will create to any degree that shows respect for the scale of harm being done. Some, like Underwood, might acknowledge that some problem exists, but they won’t, as Underwood doesn’t, spell that harm out in explicit detail, acknowledge the injustice of it (except, perhaps, vaguely or with handwaving), or argue with the same passion for that injustice to be righted. The least “rational” of them, unfortunately, either don’t acknowledge these problems at all or, as Underwood does, try to walk right by them as if they are mere inconveniences and not violations on a scale never before seen.
On Bluesky, I likened this to coming into possession of many bags of candy. These bags were stolen from trick-or-treaters. Violently. They did not consent to give anyone these bags of candy. A morally honest person would return the candy to attempt to right a wrong, or try to right that harm in some other capacity; they’d acknowledge all those kids with black eyes and concussions. A moral coward knows about the kids but still enjoys the candy; they might wave a hand at those black eyes and concussions, or they might ignore them entirely. The candy is here to stay. The damage is done to those kids, and now we must adapt to it. The genie of beating kids for their Halloween candy is out of the bottle. Someone on Bluesky pointed out a minor flaw in this analogy: the candy is also poisoned. This is an issue with pro-AI positions, too: the failure to grapple with what these technologies are doing *to* us (mentally, emotionally, intellectually).
This is moral cowardice of the highest order. Ta-Nehisi Coates said of moral cowardice that it “requires choice and action. It demands that its adherents repeatedly look away, that they favor the fanciful over the plain, myth over history, the dream over the real.” He was talking about the Confederate flag, but that same attitude applies to the many moral issues facing us today (and let’s not forget that the LLMs behind the GenAI systems we’re talking about can and will reproduce racial biases). Refusal to face a problem directly, especially a problem at the scale of GenAI, even when its harms are documented, historical and ongoing, is a moral choice. To be won over by its perceived benefits to our ability to communicate is to sacrifice a part of our humanity for flashy slop. To argue for its inevitability, the impossibility of extracting it from our lives, is to peddle myths and dreams. It is the choice to wave away theft and harm to creatives, ecological destruction, the decimation of communities to service data centers, the whittling away of the last remnants of truth and the real, and the slow deterioration of the human mind, inarguably our most important feature as a species.
If Underwood had simply said “we need a bit of nuance because there are use cases that we may not know about, such as spellcheckers and so on, which don’t involve generating the whole work and may be difficult (but not impossible) to avoid,” I would likely agree in practice but not in principle. True, this also downplays or ignores the real and continuing damage of such systems, but I agree that there is a difference between using Grammarly to check grammar and using ChatGPT to generate parts of one’s text (as Underwood admits to doing for this post — sorta). Making that point would not require 2,600 words. It has been made succinctly in social media posts. And it is a point that, frankly, I’m not convinced needs to be made, for the same reasons Foz Meadows outlines in their response. Regardless, this point would not require “genie out of the bottle” defenses and other pro-AI pablum, and it certainly doesn’t justify downplaying, handwaving, or ignoring the serious objections raised by opponents.
There is a remarkable irony in Underwood ending her post by saying that “people are what matter.” In almost every way that really matters, her letter is only a defense of edge cases. It does not defend all of those who have been and will be harmed by these technologies. It is not a defense of me and the many others whose work was stolen and who will not be compensated for it. We must get over it. We must accept that this is the new normal. Nor is this letter a defense of the people in all the other circumstances, ones not tied to creative function, who have been and will be harmed, all so we can have some tool that, admittedly, is kinda neat but (to me) largely useless unless you’re not particularly interested in learning how to do things for yourself. And it is all incredibly frustrating, because it constantly feels like those who parrot pro-AI positions (even if they themselves are not hardliners) dismiss and belittle those who are or will be harmed by these tools. Certainly, some of these folks don’t mean to, but the cost of their silence on these issues is monumental.
It wouldn’t surprise me if this post comes across as mean and aggressive. Nobody likes having their morality questioned. But refusing to address the moral problems these technologies produce with anything more than footnotes and vague acknowledgements is simply moral cowardice. If people really do matter, then we cannot be ostriches in this moment. We need to confront just what these technologies are doing to people, and at what scale. Otherwise, I fear what the future will bring.
