What Is It If It Isn't What It Is
I think this is a very badly argued post about an interesting aspect of the discourse around LLMs.
"it is what it is." [...] No. Fuck no. No it is not. You don't get off that easily. [...] We are making this choice. But really, that means you have already decided for me. And I curse you and the ground you walk on for it. No, I'm not joking or exaggerating. Burn in hell. [...] Whether I'll be able to swallow it and accept (as I do for Amazon and Android, after long resisting them) in order to find a job, I don't know. [...] As I learn how to work with AI coding agents, know that I will be thinking ill of you the entire time. Not because I don't get to write for loops, but because you have made yet another part of the economy impossible to engage with ethically. This is how societies die. I wish I were being hyperbolic. I really really do. But I have nothing left but contempt for "it is what it is." [...] No it isn't. It only is what it is because you're OK with what it is, and aren't putting in the work to make it otherwise. Those that don't give a fuck about fair copyright application, about poor people, about our planet, are putting in the work to make it so. And you're letting them. [...] I do not forgive you.
It's the juxtaposition that interests me: first saying that "it is what it is" is just an excuse to behave unethically, i.e. basically "we can decide against AI, if we all want to", and then saying "I will have to learn to work with AI to get a job in IT going forward". I suspect we'll see more of this, because people need to eat, people need to feed their kids, and people with years or decades of experience in the industry can't simply get a job in another industry (and in all likelihood wouldn't get the salaries they are accustomed to, the ones that pay for the houses they live in and so forth).
I personally see actively and critically engaging with LLMs as the realistic ethical stance at this time.[1] The article I am talking about implies a certain inevitability that literally anyone can claim: you made me do it. If the writer of this post can claim that for himself, so can we. Somebody else made us do it. But if that's the case, if we are all in some sense forced to engage with this technology, then we have to imagine a way forward in which we all understand not only the social and environmental costs of these technologies, but also how to make them better by incorporating them into an imagined democratic future.
Collective action problems are not solved by reminding people of the paradox of collective action, but by winning small arguments one at a time, in an excruciatingly slow manner (and by changing people's frame of reference). This is the work. So you have a choice: use coding agents through gritted teeth, wishing that people go to hell (and meaning it)[2], or take up the work that is to be done.
[1] I probably said this most clearly in my article Those Recent Puzzmo Articles about Claude Code: "My unpublished article I quote here was supposed to be called "We Need People With Hearts Working on AI". And that means on all sides of the matter. Inside companies using "AI" (like mine and probably yours) and by workers inside companies like Anthropic, too. This is how I envision we actually change a technology/product and its influence on society for the better: From the "inside" and gradually."
[2] WTF