So yesterday, shortly before bed, I got angry about something somebody else wrote on the internet.
[@manton](https://micro.blog/manton) This is such a bad take. Why do this? You make me as a mb user look bad by association. I get that you're trying to say that it was wrong and that people are emotional about this. But what you're also saying - inadvertently perhaps - is that Altman (et al.) didn't do it on purpose, because he's obsessed with the movie Her, and that therefore people should not have freaked out, or, if anything, should have freaked out earlier, since that voice had already been out there for a while?
And what does it even mean to say that their account is not a total lie? Does it need to be a total lie?
Why protect a company and a CEO like this? I myself use AI here and there, and I think it is an interesting technology that is probably here to stay (for better or worse), but there is no need for "both-sidesing" it here.
This was a reply to a post by Manton - the person behind my blog hosting service Micro.blog:
When your company becomes the enemy, all that matters to people is what feels true. OpenAI’s Sky voice shipped months ago, not last week. We hear what we want to hear. OpenAI mishandled this, no question, but most likely Her is ingrained in Sam’s head vs. intentionally ripping off Scarlett.
I had innocently been scrolling through my Mastodon feed and saw a couple of posts about it scroll by.
I am not above admitting that I got angry, in part, because other people got angry. Social media is a seductive medium. I was tired and it was easy to fire off a reply like the one I did. It felt righteous.
Now, I had written very recently about my interest in no longer playing this “moral purity”-based game (not that I ever really played it, but that is beside the point), in which we proclaim a certain world view or stance as morally superior, avant-garde or whatever, and start to judge what’s happening in the world from that perch. My point with this was and is that we ought to construct and view - or at least make it possible to trace - the complex network that makes up the state of any (local) reality in our moment in time. I want fewer reductionist views (although I freely admit that heuristics, simplifications and abstractions are important actors in a text and are not to be ignored either) and more connective tissue between manifesto-like expressions and the details and steps of how to actually scale those ideas for the planet, or even just manifest them in my local reality, here and now.

I am a skeptical person and am always a little suspicious when people proclaim things like “Just don’t use AI!” or “Just don’t fly!”, because all this makes little sense if - and I speak for myself here - AI is, for example, helping me paper over gaps in my programming knowledge. I am dependent on the job I have, so I had better have some understanding of how to use this technology. When my boss says “your estimate is too high, ask ChatGPT how to do this and you’ll be quicker…”, I of course have a thousand counterarguments in my mind, but in the end I am not studying philosophy, management theory, or whatever. I am employed as a programmer and am expected to do my work in a way that is (seemingly) most efficient. And I have to admit that AI is actually helping me, too. I think. It is in any case an interesting technology that I should know how to use if I am to engage in meaningful discourse about it - so much so that I also use these tools in my free time here and there. I also happen to have family in Germany while living in Finland, so I will fly more often than others (at least twice a year, probably) to see my family members.
So I think I am allowed to say that I get it. I get the need and the want to express something more complex than “Good is defined as X, you’re not X therefore you’re not good.”
However, taking on more complexity in a take still does not excuse you from also recognizing power structures. Many people are outraged over OpenAI’s handling of a voice that sounds very similar to Scarlett Johansson’s. I am not very hopeful that an investigation would ever conclusively show that OpenAI simply cloned her voice, or that they chose a voice actor precisely because she sounded like Johansson. But it seems questionable not to assume some form of recklessness or even ill intent here.
People who have the bandwidth for it are worried about and annoyed by tech bros and Big Tech - often for good reasons. It’s not like there is no evidence. They feel violated by how invasive and exploitative these companies are. This is where most manifestos in this space come from, I bet. So a case like this - which, to most people who can stomach engaging with this part of the AI hype, is just another in a growing list - is to be taken seriously as a prime example of what people despise about the companies behind AI.
I am angry about posts like the one I replied to because they give more complex, maybe more subtle viewpoints a bad reputation. I do not believe that all complex takes are created equal. But it can look that way sometimes, which is why complexity has become a watchword and moral purity so attractive. “Both-sidesing”, though, is a terrible substitute for a careful, nuanced take.