There is no real reason for this, but I feel anxious somehow. Doesn’t happen that often. Thank god.

LLMs' Usefulness As A Tool Is Not What Makes Them Morally Ambiguous

What I do get: I get that LLMs are not real AI and that fantasizing about LLMs being sentient is foolish. LLMs are also morally ambiguous technology, no question. However, there is a certain skeptical bent towards the usefulness of LLMs that rubs me the wrong way, especially when it gets mixed up with the - absolutely important - moral questions concerning the training of models on copyrighted materials.

Matt Gemmell’s post “Authorship” seems to me to be an example of this. He argues that things created using LLMs are “automated plagiarism”, that people using words like “created” with regard to LLM output ought to know that they have not, in fact, created anything, and that they are lying if they claim otherwise.

I don’t know. Matt does not differentiate between the concept of LLMs - which could very well be imagined as a morally sound technology (excluding the horrible climate impact of LLMs, for the purposes of the argument…) - and the actual flawed instances of LLMs in the wild right now. I believe that appropriate legislation would indeed slow the “AI revolution” down quite a bit, but I also think that we have passed a threshold, which makes it imperative to imagine an ethical version of LLMs (et al.) and advocate for it.

Therefore I would suggest that a ChatGPT-like LLM assistant can indeed be super useful for all kinds of knowledge-worker tasks. It is a mighty tool indeed. A mighty tool means taking on responsibility, too: I ought to know what it is I’m doing. This is true for simple tools like a hammer, and for increasingly more complex ones like a chainsaw or - changing categories here - a printing press, a personal computer and so on. LLM assistants are no different. It also takes some skill to work with such an assistant to make it do what I want it to do. All of this is on the “this-worldly” (is that the right word?) side of tool use. It seems natural that tools would assist and extend our capabilities, and LLMs aren’t an exception to that.

When talking about LLMs as tools we can make an argument about craftsmanship, I guess. A skilled person using simple tools can indeed feel great about it (from what I hear and read). “I feel a connection to the wood” and so on. A carpenter does not use the same tools as a furniture factory. But does this mean a furniture factory is not creating furniture? I’m also not so sure that “carpenter”/“factory” is to “furniture” what “knowledge worker without LLMs”/“knowledge worker with LLMs” is to “content” - I mean this in the sense of “using an LLM is maybe more like using a veneer press than an IKEA factory to create furniture”.

(I also feel it is very important to point out that what actually makes all the difference is engaging with the material in front of me, not which tools I use (they can enhance my engagement, but everything hinges on my engagement). In that sense it is related to my recent post about PKM systems and whether they are needed or not. But my argument here is not really about this…)

In short: Separating the (pressing!) moral questions from LLMs as tools and their use seems important, because in order to make a really balanced argument for or against LLMs, I need to argue from a position that acknowledges the usefulness felt by their users (which is, if you ask me, real). We can and should (must, really…) also talk about copyright issues and climate impact and whatever else is questionable about LLMs, but a holier-than-thou position won’t lead anywhere progressive either - in fact, instead of talking about these issues, I spent all of this post arguing against the blending of these separate points.

The style is different, fewer Vogue cover stories[…]

Wow. This is a Finnish professor of world politics commenting on the outgoing prime minister Sanna Marin (the question was about the different styles of leadership).

P.S.: I acknowledge that it was very early in the morning and the prof is not a native speaker, but: casual sexism much?! And how is it that the national broadcaster Yle just lets this sit there, uncommented, yet edited for clarity compared to the podcast episode?

Uff. Finland just became way more right-wing. Sanna Marin was not re-elected. Her Social Democrats finished in third place. In front of them: a moderate and an extremist right-wing party. 🤬

EDIT: And the extremist right-wing Finns party was the one most voted for in my area (greater Oulu region)…

Tools Make Knowledge

HeyScottyJ - No PKM necessary

The concept of Personal Knowledge Management (PKM) is flawed in that it fails to recognize that there is more to knowledge management than simply collecting, storing, and organizing data. Rather, knowledge management is a process of transforming data into information and then into knowledge through the application of cognitive processes.

While PKM tools can be helpful in collecting, organizing, and connecting the information you gather, it is up to you to do something with that information to turn it into knowledge.

I agree with Eric’s point - and have made a similar one recently - just gathering and organizing information is not knowledge.

However, I feel that Scotty seems to throw out the baby with the bathwater: just because PKM tools are limited doesn’t mean that they are useless. Many people (like me) want or indeed need a writing surface to think. Finding a good writing surface makes a real difference, and PKM tools can be an incredible writing surface.

Does this make them “necessary” for PKM? First, what actually is a PKM tool? A notebook? A piece of paper? An index card? A Word document? The current crop of fancy note-taking tools like Obsidian et al. are mostly more sophisticated versions of simpler PKM tools that came before them. So if we just talk about “tools that help turn data into information and in turn into knowledge” (obviously not on their own - I’ll have to engage with the stuff in front of me), then, I’d say, they are indeed needed. If it is about the more sophisticated variants that have become popular over the last few years, then I think the question can be answered with a “no, but”. The “but” part here would make an argument about convenience but also elaborative power.

It can be very convenient to use a modern tool like Obsidian, and although not strictly necessary, working with it is just very nice. As is using an iPhone (in my experience) or a sharp knife. Having a nice tool makes me want to use the tool more, therefore engaging me with what I have gathered before, which in turn makes it more likely that I’ll acquire knowledge in the process.

More sophisticated PKM tools have, generally speaking, more elaborative power. If I can view and connect an idea in many different ways with ease, I am engaging with the information at hand, making it more easily retrievable. In other words, these actions make it more likely that I learn how it fits with other things I already know. This is what learning is: trying to understand, trying to get at the ideas behind the information and finding a “fit” within the greater universe of concepts and ideas I already know. If I have a tool that makes the grunt work easier - like creating links, making my notes portable and so on - I can actually focus more on engaging with the interesting things I have collected. That’s a good thing.

I’m also not sure that the PKM concept itself fails to recognize the importance of actually engaging with my data, as is claimed. I believe that people are aware. I, for example, was not surprised that PKM should do more than “just” organize data (but was surprised that Scotty would think knowledge workers who have heard of PKM aren’t in the know). I still think that there is a good point in here that got buried, though: no matter the sophistication of the tools used to engage with gathered information, I’ll have to do the legwork - which is what Eric took from it (and what I agree with).

In short: Sure, knowledge work doesn’t need sophisticated PKM tools, but it is tools that make knowledge, so I find it a little reductive to call a concept “flawed” and tools associated with it “unnecessary”, especially when it has a lot to recommend it.

P.S.: I would be on board with calling hype cycles around new technologies like “more sophisticated PKM tools” (or, more recently, “LLMs”) annoying, but these tools (both of them, actually) are not useless and therefore shouldn’t be so easily dismissed.

Actually impressed by the ChatGPT bot for BuyItForLife. This is a GPT-like(?) bot that has been trained on reddit’s “Buy it for life” subreddit for tools and gear that are high quality and long lasting. I’m no expert, and this surely has a US bent, but pretty good results overall!

Finished working late today, but only because I interrupted work for a bit so that my partner and I could go out for an early dinner at a nearby restaurant! 🤩 (Napu had to wait for us at home. It’s good training against separation anxiety.)

I very innocently tried to diagram my current GTD-like workflow in MindNode and after a couple of hours(!) realized that this thing is so complex that it doesn’t fit into a tree-like structure anymore… 🙈

Does anybody know about environmentally conscious work done in the world of “AI”?

I feel like I read a lot of “it’s expensive and hardware needs to be more readily available”, but is there work being done to make these things more sustainable?

It’s part of the critiques, for sure, but otherwise absent.

Newsletter writers: Apart from an “unsubscribe” link, please also offer a “change email” link.

Seems like an easy thing, but it is very seldom offered.

Turning the clocks back has so far destroyed me three mornings in a row.