Purity Based Argumentation
In The Context Of Manifestos
This is probably where this thread started.
I'm all for tying those "How?"s I would have liked to hear more about to higher-order thinking, but I am not interested in being sorted into good or bad depending on whether I adhere to all of the categories or none of them, without being told what it would take to make even one of these categories real on a planetary scale (is that the goal?) and what is being done to make trying for that worthwhile.
In The Context Of "AI"/LLMs
In theory, keeping myself morally untouchable and staying "pure" is interesting. But just as I tried to express when talking about manifestos and their harsh delineation between good and bad according to a standard they define without outlining the practical steps to make it a reality, I find myself reaching for the same point here: purity is theoretically interesting, but in practice life happens elsewhere, so it is more a question of degree, if anything.
This text is not about changing society through political action, though. It is about exploring a way to live within our current situation that neither loses sight of the complexities of life by proclaiming a set of maxims, nor throws out the baby with the bathwater by being a cynical, egotistical jerk. The former leads to a kind of "purity discourse" that doesn't help any real person; if anything, it may make you feel bad if you can't live up to the manifesto's demands. The latter lives in a vacuum where nobody else matters, which is mostly sad and infuriating for anyone with a heart.
A little clarification here:
I do not claim to have all the answers with regard to how to deal with LLMs either, but I do strongly believe that throwing yourself into all aspects of an issue is a great way to learn more about it. I have also advocated before for avoiding a purity-based approach to contested topics ("you either do everything right, or you're a monster" isn't a good approach). I did this even in my last post on LLMs. I think it is fine and necessary to overstep from time to time - within reason.
And elaborated on it here:
It's important to note that my article tried to figure out a framework that - all else being equal - takes a sanity-based approach to judging the use of a technology/product that exists, right now. Given that I, as an individual, can't really change how the current crop of "AI" was made, I can at least find a way to interact with it that makes sense and isn't "purity based".
I think this "anti-purity framework of judgment approach" is still a good idea. Normal people - including you - will use LLMs, and sometimes you'll overstep and use them for frivolous things. Within reason, that's fine.
Same post, a little later:
In an unpublished article about LLMs I wrote:
Will performatively writing purity-based arguments against LLMs do anything, though? No. But there is an important difference: being open to the idea that LLMs could be changed ever so slightly into something better could do at least something.
In The Context Of Leaving Microblog
Now, I had written very recently about my interest in no longer playing this "moral purity"-based game (not that I ever really played it, but that is beside the point), in which we proclaim a certain world view or stance as morally superior, avant-garde or whatever and start to judge what's happening in the world from there. My point with this was, and is, that we ought to construct and view - or at least make it possible to trace - the complex network that makes up the state of any (local) reality in our moment in time. I want less reductionist views (although I freely admit that heuristics, simplifications and abstractions are important actors in a text and are not to be ignored either) and more connective tissue between manifesto-like expressions and the details and steps on how to actually scale that for the planet, or even manifest it just for my local reality, here and now.