# WeblogPoMo2024 - Vulnerable Thoughts Around LLMs and Generative AI

It makes me extremely uncomfortable to think in terms of ethics when it comes to generative AI. I would like to simply agree with Baldur Bjarnason on the matter and say that there is actually nothing to discuss, only to state that AI is unethical, unsustainable, unresearched and actively harmful to the planet and its inhabitants. It’s also a bubble and can’t deliver on its promise:

> Found via a Reddit post about a WSJ article quoting a Sequoia presentation:
>
> In a presentation earlier this month, the venture-capital firm Sequoia estimated that the AI industry spent $50 billion on the Nvidia chips used to train advanced AI models last year, but brought in only $3 billion in revenue. This 17x number is just for chips - Nvidia chips alone, I think - so the actual cost-to-revenue multiplier is much higher in reality. The hardware the chips are installed in and the actual CPUs are extra. Research is extra. The army of freelancers used for RLHF training is extra. Electricity cost is extra. And chips depreciate in value pretty rapidly, especially since every chip vendor on the planet has more specialised ML chips in the pipeline that are more effective at the task. This investment will be worthless pretty quickly.

But then … I am using it at work - where my employers pay for a pro version for us to use - and even in my free time, since access to a more advanced version makes it more interesting to use. So I am - as I am so often - caught up in a kind of fatalist argument, it seems: I do not see LLMs going away. Do I feel better for not using them? Only theoretically. I feel like I am learning things while I use them, because there are actually vast swaths of what I am theoretically supposed to be able to do at work that I can’t do without a nudge here and there. The same is true in recreational programming.

In theory, keeping myself morally untouchable and staying “pure” is appealing. But just as I tried to express when I was talking about manifestos and their harsh delineation between good and bad - according to a standard they define without outlining the practical steps to make it a reality - I arrive at the same conclusion here: purity is theoretically interesting, but in practice life happens elsewhere, and so it is more a question of degree, if anything.

I can’t and won’t deny that I find these AI tools helpful and interesting, sometimes. Nor will I deny that they are problematic. However, I will say that excluding these things from your life - by individual consumer choice - doesn’t do anything to make them do less harm, make them more sustainable or actually change the practicalities of life. We have become very good at defining things in such an “either/or” way that it has become useless to apply these standards to any real-life situation, where you may be forced, coerced or seduced into - for example - using these things. What now? Time to stop living? Time to apologize for the rest of your life? I think that we may have to relearn how to examine the world. Not in terms of the purity of our actions, but in terms of the realization of the world we inhabit.

This means many big and small things. One small thing it means is understanding the relative limit any one person’s actions have on the whole. We are an expression of the whole, so not being able to change the whole is not THAT surprising, I’d say. It also means that better understanding the whole - for example by examining the insane amounts of money and resources that are put into training, developing and serving these models to customers (as in the quote above), and how insanely powerless we as individuals are to change this - helps us understand ourselves and our place in this world. Look, even in Europe - which I would only call a beacon of democracy by comparison to the alternatives out there - I do not foresee a policy strict enough to make LLMs impossible to deploy, even though they are, in their current state, unethical in so many ways.

But if this is the case … I think you ought to be able to examine what you’re dealing with. People have always found interesting what is problematic in one way or another. As far as I can tell, LLMs and the whole field of commercial AI are no different. Does this give you carte blanche to not care about any and all concerns around them? No. Does this mean you can’t use these tools? I don’t know; I tend to think the answer is no here, too.

There are things that are taboo in society that you definitely, positively cannot find good and explore in the way I try to argue for here. You can’t try on “slavery” or “Nazism” for size, for example. But LLMs are not a taboo. They are problematic, sure, but they are not the same thing as those societal taboos. They may become one; we’ll see. It follows that you may be interested in them as long as you stay mindful of and open to what they may become. I would even say that it is important to stay engaged, because this also makes it possible to recognize what may be worth developing further.

I guess what I’m trying to argue for is:

1. LLMs are going to stick around. They may not fulfill their promise of becoming superintelligent conscious agents - which is pretty unlikely and in any case prohibitively expensive and ruinous to our planet - but they are here to stay in some form or fashion.
2. Being interested in and using LLMs - even in your day-to-day life - won’t change this. However, it will give you a better idea of what LLMs are and what they can and can’t do in an experiential sort of way that will absolutely change your perception of them. It’s not wrong to be curious as long as you’re cautious and recognize that you can’t generalize your experience: you’re not doing publishable research, you’re finding out for yourself.
3. I find it MUCH more valuable to live in the uncomfortable truth that you, as an “unpure” individual, can only do so much - apart from being interested in what is going on and actually recognizing and examining what makes existence so uncomfortable when you can’t change what’s happening, which is kind of contradictory. I want people to express this ambivalence and live in it, because most of us are simply unable to live purely for purity’s sake.
