DailyDogo 1009 🐶
DailyDogo 1008 🐶
DailyDogo 1007 🐶
DailyDogo 1006 🐶
DailyDogo 1005 🐶
DailyDogo 1004 🐶
DailyDogo 1003 🐶
DailyDogo 1002 🐶
DailyDogo 1001 🐶
That 1000th pic in my “Daily Dogo” series also marks, by sheer coincidence, Napu’s third year on this planet.
She hasn’t been with us from day zero, of course; that anniversary comes in October. Also, according to Wolfram Alpha, I have dropped 23 days somewhere along the way:
I guess I don’t mind that much, and I’m not going to fix it either. Especially in the beginning, when I was doing things manually every day, or when traveling and having to repair the streak, things got untidy at times. But whatever.
Not all pics are great, but I am immensely proud to have stuck to publishing these for all this time.
A big thank you to anybody who liked these posts on Mastodon or MB!
And an even bigger thank you to my partner E., who provided a good amount of the best pictures when I didn’t have any good photos (or sometimes any at all) of our shared little chaos agent.
If you’re curious how the process works: I use Shortcuts and an app called Humboldt by Maurice Parker (@vincode on MB) to publish these posts semi-automatically.
Here are some screenshots:
This one is built out of two other shortcuts:
This one converts an image from my Photos app and then uploads it to MB.
This one looks at the RSS feed of the DailyDogo category and does some string manipulation to get the next DailyDogo number.
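For the technically curious, here is a minimal sketch of what those two pieces boil down to, written in Python instead of Shortcuts actions. The Micropub media endpoint, feed URL, and token handling are my assumptions/placeholders here, not details taken from the actual shortcuts:

```python
# A rough sketch of the two sub-shortcuts in Python. The endpoint and
# feed URLs below are assumptions/placeholders; the real thing is
# built from Shortcuts actions and Humboldt, not Python.
import re
import requests

MEDIA_ENDPOINT = "https://micro.blog/micropub/media"  # assumed Micropub endpoint
FEED_URL = "https://example.com/categories/dailydogo/feed.xml"  # placeholder
APP_TOKEN = "..."  # an app token for MB

def upload_image(path: str) -> str:
    """Upload an image and return the URL it gets assigned."""
    with open(path, "rb") as f:
        response = requests.post(
            MEDIA_ENDPOINT,
            headers={"Authorization": f"Bearer {APP_TOKEN}"},
            files={"file": (path, f, "image/jpeg")},
        )
    response.raise_for_status()
    # Per the Micropub spec, the uploaded file's URL comes back in
    # the Location header.
    return response.headers["Location"]

def next_dogo_number() -> int:
    """Find the highest DailyDogo number in the feed and add one."""
    feed = requests.get(FEED_URL).text
    numbers = [int(n) for n in re.findall(r"DailyDogo (\d+)", feed)]
    return max(numbers) + 1 if numbers else 1
```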
So there you have it. Not very fancy. Not writing alt text for these pains me at times, but I’m pretty sure I would otherwise have stopped the project the first time things got tough. It really is thanks to the shortcut that I managed to keep this up for so long.
DailyDogo 1000 🐶
DailyDogo 999 🐶
DailyDogo 998 🐶
DailyDogo 997 🐶
DailyDogo 996 🐶
DailyDogo 995 🐶
DailyDogo 994 🐶
DailyDogo 993 🐶
The Way We Use LLMs Makes All The Difference
I am not using a lot of AI stuff. I do, however, have access to GitHub Copilot and ChatGPT and use them, since we are supposed to be using them at work. I am also well aware of the problems (ethical, environmental, social, economic, …) that LLMs pose, thanks to Mastodon and my feed reader.
So, being exposed to lots of AI stuff in practice and to lots of critical (and sometimes not-so-critical) thought about using LLMs, I have noticed a certain clarifying trend in my thinking about AI.
But before we come to that, I have to frame my thoughts correctly.
Living virtuously
If I could decide, I would live virtuously; or rather, I would like to live in a world where using LLMs is a neutral-to-positive thing. But if we think about it, nothing is purely neutral or positive: everything has trade-offs. Everything has a cost.
Owning a car has practical benefits but certainly has an impact on all kinds of things around us. Choosing to fly to see family has an impact. Living a privileged life in a Nordic country. Owning a dog. Wishing to own a freestanding house with a yard. Wanting to own a cottage, too. Wanting to travel. Wanting to refresh your hardware. Wanting to update your furniture. Wanting kids.
Some of these things are socially acceptable wants/behaviors/possessions. Some have become more questionable in recent times. But I think it’s important to recognize that a) none of these things are climate-neutral, free of privilege, or above ethical scrutiny, and b) each of these things (or some equivalent) will be wanted by at least some people, and we as citizens of a (Western, democratic) society mostly make our peace with that, because changing attitudes towards these things is hard.
And as much as we may recognize the costs associated with these things, we may want them ourselves anyway. So how could we criticize people for wanting them? Furthermore, we only have one life to live, and we are, in the end, not really able to transcend any and all societal wants and needs. Some we can, some we can’t. Some will feel like natural, biological needs and wants, and we will feel justified in seeking their fulfillment.
And then there are others, mostly those we are able to let go; these feel like optional luxuries we can choose to adopt/buy/whatever. And we may get mad at others for wanting them much more than we do, even though nobody needs them. Don’t get me wrong: there will always be discourse around things like this, and things and values change, obviously. But there is one category of wants and needs that is socially unacceptable pretty much across the board: taboos. Most things aren’t taboos, though.
And LLMs aren’t either.
LLMs from my viewpoint
With that in mind, LLMs are not that different from other socially acceptable things out there. They are, however, a relatively new product/service category. They are not a new technology, just a new product (EDIT: I do, however, use “technology” as a shorthand for them in what follows). As is often the case with new “technologies” like this, we tend to ask of them and of each other: what is an appropriate amount and type of use, and are they, after all, a net positive for society and for me? I have two answers.
For me, they are a positive in certain contexts: those where what is generated can be easily validated. So far, all of those contexts pertain to programming.
LLMs help paper over certain gaps in my knowledge, and they help with the busywork of programming by reducing the amount of boilerplate code I have to write by hand. If the code doesn’t run, the generated code was wrong, and it’s relatively easy to figure out what went wrong.
AI can be a great rubber duck and boilerplate generator. In sum, programming with LLMs forms a great, tight feedback loop that makes coding more enjoyable and makes me feel more productive most of the time.
I very rarely use it for anything not programming-related. I experimented with using it for ideation (my notes system is MUCH better at making me think interesting thoughts) and as a writing coach, but the latter just removed any writerly voice I had. I am personally not interested in using AI to generate texts or game systems, or to min-max anything (except maybe my programming).
LLMs and Society
From the societal viewpoint, on the other hand, it’s important to keep in mind that society is not a person: it doesn’t make decisions, it doesn’t have intentions, and you can’t actually interact with it directly. So making demands on society doesn’t make a lot of sense. It may make some sense to demand change from politicians, but politicians are not society.
This text is not about changing society through political action, though. It is about exploring a way to live within our current situation that neither loses sight of the complexities of life by proclaiming a set of maxims, nor throws out the baby with the bathwater by being a cynical, egotistical jerk. The former leads to a kind of “purity discourse” that doesn’t help any real person; if anything, it may make you feel bad when you can’t live up to the manifesto’s demands. The latter lives in a vacuum where nobody else matters, which is mostly sad and infuriating for anyone with a heart.
So instead, I want to explore what it would mean to live in a society that expects its members to live a certain way, such that its members tend to err on the side of fulfilling those expectations. This doesn’t mean that if you fall short of society’s expectations you aren’t a member anymore, just that other members may view you less favorably. It has nothing to do with being in or out.
People who fall far short of society’s expectations - e.g., those who commit crimes - may be punished for it. But from fines to prison, being punished for bad behavior still doesn’t make you not a part of society. In other words, I am interested in figuring out a baseline expectation for the use of LLMs that you can overstep at times - as long as you don’t go too far.
Therefore, the question “Are LLMs a positive for society?” has to be restated as follows: answer the question “How would I like society to behave with regard to LLMs?” and then live like a person in a society that answers it the same way.
Note that I am not touching LLMs themselves - the technology or the product - at all. I can imagine LLMs being regulated, turned into a public good, or whatever, but that part is not in my control. Imagining how to change LLMs is therefore imagining the wrong thing for our purpose here.
Here, we as citizens of a society are interested in figuring out a model of behavior we can follow that would work “scaled up” (that is, for more than just me or the privileged few). I don’t want LLMs to be used for frivolous things - using water-guzzling LLM APIs to water your own houseplants is a real-life example I came across in recent months.
But since I imagine myself living in a society that is cognizant of the costs associated with using LLMs, people would mostly not do these frivolous things, because it would feel morally wrong to them. This scales well. We might use LLMs for frivolous things here and there, but we wouldn’t overdo it, because we are not blind to their cost. And we wouldn’t build whole automation workflows on LLM APIs to water our houseplants while the data centers running these LLM services guzzle fresh water like nobody’s business.
I would want LLMs to be used for beneficial use cases like programming. By this I mean I would let them help me generate boilerplate code and use them as an interactive rubber duck. I don’t mean that we would replace junior or mid-level developers (such as myself) with AI assistants; we would all know that this doesn’t work in the first place. We might run small-scale experiments here and there to see whether AI could stand in for a developer, but we wouldn’t actually try to get rid of workers to make more profit.
Since we are all into reading, writing, and expressing ourselves through our unique voices rather than through a statistical middle-ground generator, we would prefer and support people who do the hard work of actually expressing themselves.
We would, however, try to stay open-minded. There may well be a time when an LLM can be beneficial for automating the rote parts of personal knowledge management, for example. So we may periodically try these things out.
In short, we would all stay curious about the technology’s potential but make sure not to build our whole lives around it. We wouldn’t automate tasks with LLMs that we don’t have to, and above all else, we would prefer a unique voice when expressing ideas, feelings, and viewpoints over the statistical recombination mishmash. When it matters, we prefer the human touch.
That being said, we will all fall short of, or disagree with, one or more of these preferences and expected behaviors once in a while. And, within reason, there is nothing wrong with that. Life’s complicated.
To get back to the question of whether LLMs are a positive for society: the answer depends entirely on what we use them for. Right now, it seems to me that LLMs are mostly helpful in a programming context (and maybe a handful of other use cases, though I have no first-hand experience of those), and they are regrettably useful - or at least used - to scam people, cost workers their jobs, threaten the environment, and drown out the beauty and uniqueness of everyone’s voice.
Depending on our own use and what we expect others to use them for, they can be at least a neutral thing. Or a somewhat-negative-but-accepted-because-of-the-upside thing. Like a small, reasonable car.
This Twitter post made the rounds recently:
“You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” — x.com/AuthorJMa…
And I think it encapsulates my sentiment beautifully: use AI for the right things, and there isn’t really a big problem with the technology.
DailyDogo 992 🐶
DailyDogo 991 🐶
DailyDogo 990 🐶
DailyDogo 989 🐶
DailyDogo 988 🐶
DailyDogo 987 🐶