Martin Hähnel

A Critique Of A Critique (Of LLMs And Technology Critique In General)

Kind of interesting critique of (part of) one of Cory Doctorow's posts regarding LLM haters: Acting ethically in an imperfect world. I was initially interested in it because Doctorow made an argument against "purity culture"[1]:

Purity culture is such an obvious trap, an artifact of the neoliberal ideology that insists that the solution to all our problems is to shop very carefully, thus reducing all politics to personal consumption choices:

[…]

I mean, it was extraordinarily stupid for the Nazis to refuse Einstein's work because it was "Jewish science," but not merely because antisemitism is stupid. It was also a major self-limiting move because Einstein was right:

[…]

Refusing to run an LLM on your laptop because you don't like Sam Altman is as foolish as refusing to get monoclonal antibodies because James Watson was a racist nutjob[.]

The critique is - I think - trying to say that what may appear as purity to some is, to others, pointed critique of particulars.

The strawman is his claim that people who criticize LLM usage are doing that for some form of absolutist reasons. That they have a fully binary view of the world as separated into “acceptable, pure things” and “garbage”. Which is of course false. [...] He attacks a ridiculous made-up figure to deflect from specific criticism of LLM use (that many probably wouldn’t even apply that strongly to his use case). But that’s not where criticism of LLMs comes from: It’s mostly specific focussing on the material properties of these systems, their production and use.

The critique ends somewhat perplexingly like this:

I do agree with Cory that demanding perfect purity lead nowhere. We are imperfect people in an imperfect world. I just do not think that this means to go all accellerationalist. Just turning the “open source” dial up to 11 does not stop the apocalypse. It’s a lot harder.

So I guess everyone agrees that "purity culture" doesn't make any sense. Me too. And I personally agree with the author, against Doctorow, that just "liberating tech" (which to me isn't as big a stretch as is implied here, but I'm a Latourian...) can't be the end goal either. However, I am a useless fatalist when it comes to these things, and I don't see any realistic chance of reaching his goal of "an Internet and a world that is more inclusive, fairer, freer" through technology critique (which I presume this post is in service of).

In my humble opinion the point of technology critique is socio-technological theory-practice. It is a pretext to rewrite (and therefore reinvent) the world through changing perceptions. I liked that point from an article I recently linked:

What’s true in the world of fashion is also true in the world of ideas. Being ignorant of the forces shaping society does not exempt you from their influence—it places you at their mercy.

Similarly, a good technology critique can make people reevaluate their world and their relationship to it. Cool. Awesome. In somewhat of a follow-up to the post just quoted, I wrote:

[...] I think a lot more people will be at the mercy of the few literate people that are in power or employed by power. I imagine there will be other literate people, too. Some more, some less literate, so I agree that not all of us will lose the hunger for learning/reading/writing/thinking/arguing. But these pockets of literacy in an otherwise post-literate world will probably not do a lot to save democracy/science/humanities. Sure, us few can enjoy books (even novels), but it might not move the needle in public discourse. And I think here is where I do see a darker future than Mastroianni [the author of the article I am discussing here] who seems to be happy with the fact that his small community of readers is literate and that the decline is just moderate (so far).

Similar things could be said about the technology critique hopefuls. Cool if you have a community or a readership (one that may even appear large to you) that believes in the transformative power of the written word. But it is also kind of an echo chamber and a self-selecting, somewhat privileged circle.

If people's lives hang in the balance - even the lives of comparatively well-off people like software developers in the West - for example, if they might lose their jobs, most people will try to preserve themselves and the world they live in. People have houses, kids, pets, hobbies and whatnot.[2] It is basically unthinkable for most people to give up what they are accustomed to for ethical reasons alone. Sure, sometimes a decision is easier to make if you happen to live under the right circumstances. It's easier to criticize LLM use in professional software development - and to refuse to use it yourself - if you are a retired software engineer. Ron Jeffries expressed this recently:

One response to the above [a critique of LLMs], even if one believes it, might well be to feather one’s own nest as best one can, learning to use and guide the “AI”, so as to preserve one’s position as long as possible. I can see that and understand it. It’s not what I’d call morally great, but it’s not horribly corrupt either, in my book. When faced with no good choices, we sensibly choose the least bad.

I am fortunate to be retired and no longer dependent on software to eat. And my best wishes go to those who are, and my honest acceptance of the choices they make in this new situation. I think LLMs are not a good thing for us and we need to find ways to live with them as well as we can.

So, bless you, bless us all, and good luck!

So that probably means that most people will make their peace with using LLMs as coding tools, thereby normalizing it ("If I use it for this, using it for this other thing doesn't really move the needle either..."). The same goes for almost everyone, everywhere else you look.

I read The AI Con and was basically left thinking "Now what?" when I was finished. Most AI critique reads as completely disconnected from this context problem: People may seek out technology critique and maybe even want to change the world they live in in some way, but most won't overcome the gravity of their own lives. That is why I think a pointed critique, even if argued logically, does nothing for most people.

The only hope I have is in "many more much smaller steps", as GeePaw Hill would say. I'll contrast something from that critique with an idea I shared yesterday. Here's what the author seems to think is a reasonable approach:

“I know there are many critiques of LLMs, but right now that is the best way for me to enable my work, I try to limit the problematic aspects by using a small open weight model and checking the results in detail.”

And here is what I said:

"We simply can't just run a "Gas Town" equivalent for every developer. We are still living in the "free money" era of LLMs as coding tools, but it ain't staying that way. Of course, taking the human out of the loop will almost certainly lead to code churn and in the worst of the cases disaster down the road. Therefore the plan should be to build something that can work without the use of LLM agents (like Codex) and with a human in the loop guiding the system. This tends to be more token efficient and is of course (more) deterministic than just simply letting a group of agents with different roles working on the same code at the same time with only light supervision."

This seems to work because it doesn't talk about things like the environment or social problems; it talks about token amounts (money) and code quality (money). But fewer tokens certainly also mean less strain on the planet. Doing it indirectly is less of a provocation.

To be fair: The author is talking about fixing typos and grammar mistakes, not about using a coding agent. And if we are just talking about typos, I guess a small model with open weights might well work. However, this is far removed from the lived experience of most people who have contact with the AI world. I don't claim to have taken polls on this, but following the discourse it seems to me that most people deal with one of the bigger AI companies and not some locally run open-weight model. So what I suggest is to use these models in ways - within our current context - that are less harmful. These are minuscule changes. I don't claim to reinvent people's worlds or the whole internet, but I do claim that talking about token use in a company that wants its workers to use coding agents is probably a good thing for more than just the economics of the company.

Bringing it back to purity culture vs. pointed critique: Some things appear one way or another depending on context. In its abstract form it's hard to find any merit in purity-based argumentation. What appears as purity culture is at least in part contextual. It's contextual in the sense that saying "Sam Altman is a scam artist and habitual liar, but that’s not one of the first 10 to 20 reasons people criticise OpenAI’s products" can appear as a critique of OpenAI without the implication that using OpenAI's products is therefore out of the question. But it can also very much seem to imply this, depending on how and in what context somebody is reading such a sentence. It might be a comment under a Mastodon post written by somebody else that changes the frame. Who knows?! So critique's mercurial status is at times at odds with its mission: To open up and reinvent the world through changing perceptions.

Furthermore, people tend to be unable to escape the gravity wells of their own lives. That is a double-edged sword: The few lucky ones among us who manage to muster the energy to engage with these kinds of issues are living weirdly privileged lives. I don't agree with Doctorow or Tante (the author of the critique), but in a way we are taking part in the same discourse, which is cool and intellectually nourishing. It just doesn't really do anything in the greater world. We live on islands of ignorance if we believe we matter (including Doctorow, I believe). Saying this probably makes me a heretic.

On the other side (edge?) of this sword is the fact that whether we are privileged enough to actually act on our values and make decisions based on them is supremely contextual. Even people who are well-off rarely do this, because they (we) are so caught up in our worlds that our intellectual selves seldom get to actually decide freely, rationally, "right"™.

What can be done is a small (no, smaller!) step in the right direction, starting with yourself and your situation in the world you are already living in. If you realize you have lots of room to move, ask yourself why it is so easy for you and not for others. Finding small things in your world to improve is the name of the game. Stop dreaming of shaping the emergent properties of systems that in actuality have produced you.

I think Doctorow, Tante and I all oppose purism; we just diverge on what follows from this: Doctorow seems to think reappropriating technology is enough, Tante believes in critique, and I don't believe in either. I think we are shaped more by society and its structures, the systems that shaped and still shape us. If we are able to change anything, it's incremental, minuscule and less grandiose than what these thinkers seem to believe.


  1. If you don't know: I made a similar argument and called the concept Purity Based Argumentation for example in my article Vulnerable Thoughts Around LLMS and generative AI: "In theory I feel that keeping myself morally untouchable and staying "pure" is interesting, but just as I tried to express when I was talking about manifestos and their harsh delineation between good and bad according to a standard they define without outlining the practical steps to make this a reality, I find myself reaching for the same here: Purity is theoretically interesting, but practically life happens elsewhere and so it is more a question of degree, if anything." ↩︎

  2. Not to speak of the things and people they take care of outside their family and friends. Etc. pp. ↩︎