Some Clarifications On Yesterday's Critique Of Critique
[I'll get back to more down-to-earth topics soon, I swear.]
I am still thinking about yesterday's post. I feel a little off after having written it, because it trades some optimism for realism - or fatalism - as regards what theory can and can't do. It feels a little strange, in the sense that I stand out in a precarious way, since not many seem to share my sentiment, that this is the bridge I have to cross if I want to stay grounded. Since I am interested in what can be done better within what we have (an is-focused approach), as opposed to the grand but ultimately toothless pursuit of what should be (an ought-focused approach), I apparently have to state clearly that I do not really believe in big and (possibly) sudden changes on any scale, but especially not on a grand one. I guess that makes me an incrementalist?
The article I talked about yesterday had somewhat of an aftermath. The one post that stood out to me was a short thread by Prof. Emily Bender (who co-wrote The AI Con). Here's one post out of the thread:
It was particularly disappointing to see Doctorow misconstrue (and thus, if he is believed) undermine the work that many of us are doing to shine a light on the ways in which the ideology of "AI" and the specific ways in which LLMs and other "AI" products are created do real harm.[...]
I think there is some truth to that, in the sense that Doctorow is a public figure with a large following, and in the circles of technology writing and socio-technical research I can see that saying something like "I don't believe in what AI's critics have to sell and their purity culture." is somewhat problematic, as it might lend a hand to dismissals of valuable, hard-won knowledge. However, that means my own stance - "I don't believe in purity-based argumentation and in grand-transformational theory practice" as regards AI criticism - has to answer to that as well.
Perhaps surprisingly, I welcome, appreciate, and support the socio-technological work that makes visible all the invisible actors that make up "AI" or even "LLMs" - all the harms, all the marketing BS, all the inherent problems of this technology. I think this work is important no matter where you fall on the is-focused/ought-focused divide. It is important because it problematizes not only what "AI" is, but also what can be done about it, realistically. I came away from The AI Con with a feeling of "you, as an individual, can't do a lot". You can witness what's happening and form an informed opinion, but in the end this is a question of regulation and of reining in Big Tech.
So personally, I would not claim that this groundwork - debunking news stories, spreading research, writing about the technology in an accessible way to make it more understandable to the public - is anything less than good and important. It totally is. Full stop.
Where I diverge is in the scope of what such work can do in the world. To me, what follows from Bender et al.'s talks, books, podcasts, etc. is not a planet-wide rejection of "AI" - that is not something that theory (or academic writing) can feasibly accomplish. Instead, it maybe helps us understand the fucked-upness of the situation; any pointers theory can give come only after accepting "AI" as another wart that is now part of what makes society tick.
One more thing: I said at the end of my post that we are basically incapable of doing anything, because we are made by society. I'm tapping into a theory tradition that sees individual perceptions of agency as artifacts of a limited perspective. This tradition has had a few different interpretations, but it generally prefers structure and historicity over - especially essentialist - arguments from agency. I am of the opinion that Latour got it right: we are our networks, and ascribing any agency that can't be located within this network of actors is akin to invoking magic (not his exact words, of course). So agency is not just a given (and therefore invoking people's agency to change the world is not a satisfactory theoretical position). That's what I meant.[1]
Nobody asked me to clarify, but I thought it was worth doing. Since I am invoking Latour, it is important to point out that Latour tries really hard to move away from structure and any kind of macro level altogether. But nonetheless: agency is an effect or result, not the starting point. ↩︎