AI As A Solved Sudoku Puzzle
Just a short note on the fact that what I see in myself when using AI[1] resembles the difference between seeing a solved Sudoku puzzle and solving one: a solved Sudoku is much easier to validate than an unsolved puzzle is to solve. That is why watching Simon or Mark solve a Sudoku, or watching amazing Slay the Spire streamers like Baalorlord, is possible without having as much skill as these people have: verifying good play or solving ability is much, much easier than doing it oneself.
So sometimes I think about my AI coding agent as a (statistical) Sudoku solver: It is much easier to look at its solution and say thumbs up or thumbs down.
There is a bunch more to say:
- How does this fact relate to the Illusion of Competence?
- How does it relate to the Mere-Exposure Effect (being exposed to something more often tends to make us think of it more positively, and to remember it more)?
- How does it relate to the Illusion of Knowledge and is there a (useful) difference between this and the "Illusion of Competence"?
- Dunning–Kruger effect, etc. etc.
Anyway. I think it's useful to recognize that any time you're using an LLM, you're letting it solve metaphorical Sudoku puzzles for you, which means you don't solve them. You might think "I could've solved this," but you really didn't solve it.[2] Does this mean LLMs are complete and utter garbage? I don't know. I have found that I am not that interested in all kinds of (metaphorical) Sudoku, only in some of them. Which means that I am totally fine with not solving all the Sudoku puzzles there are to solve. Metaphorically speaking. I feel like that very much aligns with my favorite quote on the matter:
"You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."
(The author of that quote, Joanna Maciejewska, wrote this back on Twitter but is nowadays on Mastodon.)
In my case, I am not that interested in "solving" (learning) all the ins and outs of our Kubernetes-based infrastructure. I know enough to recognize a "solved Sudoku" when I see it in this area, though. And I am learning, despite it all, more and more with every solved problem. This is true in other areas as well. Having an LLM solve some of the Sudokus that you only need to verify feels like it takes a weight off your shoulders, freeing you to focus on the Sudokus you actually want to solve.
One last point for this surprisingly long "short note": I also think that having a breakthrough and playing with it is very often - or does it just feel like that? - a lot more instructive than not solving something and not knowing how to proceed, although I will concede that struggling for a while and then having a breakthrough feels amazing. Do I need to have that with everything, though? I honestly don't know.
For my general attitude towards AI/LLMs see this: Hub "AI and LLMs". It's quite the opposite of being an uncritical person. ↩︎
Or at least not alone. There is no shame in that, IMHO. Nobody is going to claim that the refactoring menu of JetBrains IDEs was a terrible idea. But you didn't do the refactoring by hand either, if you used that menu. ↩︎