LLMs’ Usefulness As A Tool Is Not What Makes Them Morally Ambiguous

What I do get: I get that LLMs are not real AI, and that fantasizing about LLMs being sentient is foolish. LLMs are also morally ambiguous technology, no question. However, there is a certain skeptical bent towards the usefulness of LLMs that rubs me the wrong way, especially when it gets mixed up with the - absolutely important - issue of moral questions concerning the training of models on copyrighted materials.

Matt Gemmell’s post “Authorship” seems to me to be an example of this. He argues that things created using LLMs are “automated plagiarism”, and that people using words like “created” with regard to LLM output ought to know that they have, in fact, not created anything, really, and that they are lying if they claim otherwise.

I don’t know. Matt does not differentiate between the concept of LLMs - which could very well be imagined as a morally sound technology (excluding the horrible climate impact of LLMs, for the purposes of the argument…) - and the actual, flawed instances of LLMs in the wild right now. I believe that appropriate legislation would indeed slow the “AI revolution” down quite a bit, but I also think that we have passed a threshold which makes it imperative to imagine an ethical version of LLMs (et al.) and advocate for it.

Therefore I would like to suggest that a ChatGPT-like LLM assistant can indeed be super useful for all kinds of knowledge-worker tasks. It is a mighty tool indeed. A mighty tool means taking on responsibility, too: I ought to know what it is I’m doing. This is true for simple tools like a hammer, and for increasingly complex ones like a chainsaw or - changing categories here - a printing press, a personal computer and so on. LLM assistants are no different. It also takes some skill to work with such an assistant to make it do what I want it to do. All of this is on the “this-worldly” side of tool use. It seems natural that tools would assist and extend our capabilities, and LLMs aren’t an exception to that.

When talking about LLMs as tools, we can make an argument about craftsmanship, I guess. Using simple tools can indeed feel great for a skilled person (from what I hear and read). “I feel a connection to the wood” and so on. A carpenter does not use the same tools as a furniture factory. But does this mean a furniture factory is not creating furniture? I’m also not so sure that “carpenter”/“factory” is to “furniture” what “knowledge worker without LLMs”/“knowledge worker with LLMs” is to “content” - I mean this in the sense that using an LLM is maybe more like using a veneer press than an IKEA factory to create furniture.

(I also feel it is very important to point out that what actually makes all the difference is engaging with the material in front of me, not which tools I use - they can enhance my engagement, but everything hinges on the engagement itself. In that sense this is related to my recent post about PKM systems and whether they are needed or not. But my argument here is not really about this…)

In short: Separating the (pressing!) moral questions from LLMs as tools and their use seems important, because in order to make a truly balanced argument for or against LLMs, I need to argue from a position that acknowledges the usefulness felt by their users (a usefulness that is, if you ask me, real). We can and should (must, really…) also talk about copyright issues and climate impact and whatever else is questionable about LLMs, but a holier-than-thou position won’t lead anywhere progressive either. In fact, instead of talking about those issues, I spent all of this post arguing against the blending of these separate points.