DailyDogo 525 🐶
DailyDogo 524 🐶
We’ll have an exterminator over on Wednesday for a silverfish and paperfish situation in the building and our apartment. Feeling anxious. Everything has to be cleaned and moved 15-30 cm from the walls. The next three days are going to be stressful.
My partner is on a business trip in Germany for three weeks. Napu and I are holding the fort at home here in Kuusamo, waiting for mom/E. to come home.
I’m proud of and happy for my partner. She got a great opportunity to widen her horizons and went for it bravely. ❤️
DailyDogo 523 🐶
DailyDogo 522 🐶
DailyDogo 521 🐶
We’re not eating this because it is tasty, we’re eating this because we need space in the fridge!
DailyDogo 520 🐶
Wow. Hug your dogs. And your (other) loved ones. www.youtube.com/watch
DailyDogo 519 🐶
DailyDogo 518 🐶
DailyDogo 517 🐶
This year I may skip buying mlb.tv and will only watch the occasional “free game of the day” to save some money.
I’ll miss Kruk and Kuip, but it’s kinda interesting seeing different commentators work different teams' games.
Late breakfast today. It’s almost 12.
DailyDogo 516 🐶
DailyDogo 515 🐶
Drinking coffee on the balcony, eating Easter sweets sent from Germany by my mom. 😊
DailyDogo 514 🐶
DailyDogo 513 🐶
DailyDogo 512 🐶
There is no real reason for this, but I feel anxious somehow. Doesn’t happen that often. Thank god.
DailyDogo 511 🐶
LLMs’ Usefulness As A Tool Is Not What Makes Them Morally Ambiguous
What I do get: I get that LLMs are not real AI and fantasizing about LLMs being sentient is foolish. LLMs are also morally ambiguous technology, no question. However, there is a certain skeptical bent towards the usefulness of LLMs that rubs me the wrong way, especially when it gets mixed up with the - absolutely important - issue of moral questions concerning the training of models on copyrighted materials.
Matt Gemmell’s post “Authorship” seems to me to be an example of this. He calls things that have been created using LLMs “automated plagiarism” and argues that people using words like “created” with regard to LLMs ought to know that they have not, in fact, created anything, and that they are lying if they claim otherwise.
I don’t know. Matt does not differentiate between the concept of LLMs - which could very well be imagined as a morally sound technology (excluding the horrible climate impact of LLMs, for the purposes of the argument…) - and the actual flawed instances of LLMs in the wild, right now. I believe that appropriate legislation would indeed slow the “AI revolution” down quite a bit, but I also think that we have passed a threshold, which makes it imperative to imagine an ethical version of LLMs (et al.) and advocate for it.
Therefore I would like to suggest that a ChatGPT-like LLM assistant can indeed be super useful for all kinds of knowledge-worker tasks. It is a mighty tool indeed. A mighty tool also means taking on responsibility: I ought to know what it is I’m doing. This is true for simple tools like a hammer, and increasingly complex ones like a chainsaw or - changing categories here - a printing press, a personal computer and so on. LLM assistants are no different. It also takes some skill to work with such an assistant to make it do what I want it to do. All of this is on the “this-worldly” (is that the right word?) side of tool use. It seems natural that tools would assist and extend our capabilities, and LLMs aren’t an exception to that.
When talking about LLMs as tools we can make an argument about craftsmanship, I guess. Using simple tools can indeed feel great for a skilled person (from what I hear and read). “I feel a connection to the wood” and so on. A carpenter does not use the same tools as a furniture factory. But does this mean a furniture factory is not creating furniture? I’m also not so sure that “carpenter”/“factory” is to “furniture” what “knowledge worker without LLMs”/“knowledge worker with LLMs” is to “content” - I mean this in the sense that using an LLM is maybe more like using a veneer press than an IKEA factory to create furniture.
(I also feel it is very important to point out that engaging with the material in front of me is what actually makes all the difference, not which tools I use (they can enhance my engagement, but everything hinges on my engagement). In that sense it is related to my recent post about PKM systems and whether they are needed or not. But my argument here is not really about this…)
In short: Separating the (pressing!) moral questions from LLMs as tools and their use seems important, because in order to make a really balanced argument for or against LLMs, I need to argue from a position that acknowledges the usefulness felt by their users (which is, if you ask me, real). We can and should (must, really…) also talk about copyright issues and climate impact and whatever else is questionable about LLMs, but a holier-than-thou position won’t lead anywhere progressive either; in fact, instead of talking about those issues I spent all of this post arguing against the blending of these separate points.
DailyDogo 510 🐶