#100DaysToOffload - Review: Cal Newport's "Slow Productivity"

I like Cal Newport’s writing - although the person himself is also slightly strange to me: who would mention Joe Rogan’s mic choice as an example of anything? In any case: the book offers some good observations on the broken idea of measuring knowledge work productivity like assembly-line productivity, and on fake busyness.

I noticed that he seems to like reading biographies of knowledge workers (in a broad sense) and mining them for examples to illustrate what to do, or why some things are or need to be a certain way. I may have to try reading more biographies myself. I love memoirs as a genre (I should read more of them), but never gave biographies a chance - because I am kind of allergic to the exceptional in general (but super-curious about the mundane).

The contents of the book just confirm my feeling that slow and small is the future, as I also express at the end of this post, for example:

[M]y own life’s plan: A (relatively) small but reflected life in the here and now is more rewarding, more livable, more rational, more emotionally honest and also more ethically sound than any sweeping pronouncements of a “big life”™ could ever be.

My recent personal manifesto was inspired in part by reading this book.

It’s a quick read (or listen, in my case) and, apart from the Joe Rogan thing, quite inoffensive (and it’s not that the mic choice example was offensive, it was just weird and suggests maybe what podcasts Newport listens to? Maybe? Hopefully not? Hopefully not in earnest, at least).

People who know Newport’s books will most likely have heard many of those tips from him before, but the reframing in terms of slow productivity is interesting.

#100DaysToOffload - Overview

  • Last updated: 2024-07-04 - 18:40

Similar to my overview for the WeblogPoMo, here’s one for #100DaysToOffload. I won’t be adding all 365 days here yet, but will add dates as I see fit.

#100DaysToOffload Manifesto: Limiting Projects To Not Be Limited By Them

(Having recently ranted against manifestos, I had a reason to formulate one of my own. I hope that this personal manifesto is more productive than something like the Manifesto for a Humane Web, because it is only meant for me, actually includes the steps I’ll take and doesn’t need to scale in any way.)

So, let’s just start with the obvious: I’m here to write and post dog pictures. I am a programmer with a background in the humanities. But I’m unable to devote real time to any side projects apart from writing here. So I made a decision. No more projects. Actually: way fewer projects.

  • I had the idea to update my crossposting workflow: that’s not going to happen
  • I had the idea of developing my own blot.im-inspired microblog sync client: I am not going to pursue that any further
  • I had the idea to develop a JavaScript-based DailyDogo viewing widget: I’m sorry, but it ain’t happening either
  • I am not going to create newsletters, websites, blogs, courses, apps, extensions, plugins, CLI tools, videos, podcasts, streams or anything else that could be considered content. I will write here and in my notes system. What I do on my blog is my public persona on the web, and what I write privately will help me realize my potential. The important part is the writing, not so much the design of the website. I will let the latter go.

I am going to write regularly in a way that I find challenging and engaging. I will try to be vulnerable, and I will try not to hide behind a veneer of safe, agreeable stances and topics.

I will try to limit and downsize non-public personal projects and ongoing areas of responsibility as well:

  • I am not going to implement a manual notes-and-highlights reviewing workflow: I just don’t have time for that every week. I review what I review. Readwise saves the highlights I make to my notes system automatically. That ought to be enough.
  • I am not going to commit to reading/listening to a certain amount of books this year: I read what I read.
  • I am not going to track, measure, document, visualize, or overly plan my life anymore. My to-do list is simple. If I can’t do what’s on the list for the day without overwhelm, I will remove items until it becomes manageable again.

I am going to continue to write in private, incorporating my love for the abstract, trying to develop ideas, and coming up with ways to live the small, slow life in a world that is on fire and will probably stay on fire for the rest of my lifetime.

I will try to embrace doing fewer things at work, too.

  • I am not going to treat every opportunity for change in the company as something I must participate in.
  • I am not going to let technical purity concerns block me from doing a good job.
  • I am not going to say yes by default to every challenge to my estimates or to the way I’m going to approach the task at hand, either.
  • I am not going to react immediately to any and every message in my work chat.
  • I am not going to ignore my scheduled time blocks for focused work or anything else I had planned.

I am going to try to hone my craft, with an eye for quality, architecture and pragmatic professionalism. I am going to take advantage of the 4 hours per week that I am supposed to be able to use for learning and growing as a programmer.

Hub "The Fatalistic Turn"

Changelog

  • 2024-05-31 - Created this note
  • 2024-06-09 - Added a post “End of the myth of rational public discourse - The example of climate change”
  • 2024-10-30 - Added a post about an Essay by Andrew Dana Hudson suggesting that dreaming about space is not dead, but has to change and incorporate the current situation

Note

See Post Hubs for an explanation of this kind of post.

All this is to say: We will have to live with it. We will have to accept climate change. We won’t be able to stop the catastrophe. All the displaced people. All the pain and suffering. All the biodiversity loss.

What is interesting, though, is that fossil fuels won’t last forever, so the end of the world’s overindulgence in a surplus of energy that is not bound to the solar energy system (the fossil energy system that spurred much of industrialization) is inevitable. We will not live to see this, but we also won’t stop the shit show until then. The planet will go through this. I don’t see how it wouldn’t.

  • The Fatalistic Turn
    • The main article for this topic. It tries to give some reasons why we won’t stop climate change. At all.

The following are part of this thread in my thinking, either by being material for arguments within that main post or by using parts of that main post to argue in some way or another for or against something else.

An early example of my stance against hoping for large-scale change of society:

It seems to me that one way forward is taking a longue durée view and investing in myth-making:

Post Hubs

Changelog

  • 2024-05-31 - Created this note
  • 2024-11-04 - Added “Maintenance Romanticism” Hub

Note

A hub is a new idea of mine to resurface and connect my blog posts that more or less have something to do with a topic. I’ll collect posts of mine - maybe posts of others? - that fit the topic of a hub.

Compared to a category, a hub allows for some commentary and is supposed to grow and change over time.

  • Similar concepts in the PKM space: MOCs (Maps of Content)

Hubs

#WeblogPoMo2024 - Thoughts on "I Like Your Blog If..."

I love this post by Lou Plummer; this part especially spoke to me:

I like smart and smart-ass but not people who think themselves smarter than everyone else. There are a few bloggers who consistently write about how dumb people are and it’s a big old turn off. I like smart people. I like people smarter than me (not hard). I even like people with a smart ass sense of humor but I have worked for too long with stereotypical computer support people who think all end users are stupid and I’m so very weary of that attitude. I think it’s great to point out the misconceptions of others but it’s boorish if that’s the main thing someone writes about.

Maybe because I’m insanely self-conscious about my main output throughout the WeblogPoMo - mostly fatalistic and negative views on political change, plus some posts born of an outburst of anger about a somewhat naive take by my blog host on the whole OpenAI/Scarlett Johansson thing, which I regret (my angry reply more than my more sober take later; I still think I have a point, though) - I also immediately thought: “for some people this could be me, maybe…”.

Anyways. A great post that is well worth your time.

#WeblogPoMo - Intentions Are A Lag Measure

[...]

The big question for me, when it comes to shifts of society like the ones you describe with your examples, is always: How much can you actually do? My reflex is to say: Not much. Intentions are a lag measure. That doesn't mean we shouldn't support what we think is important to create the world we want to live in - even if it's only for our own sakes - and support the people and actions that seem to us necessary to make these things more likely, but I do think this stuff is merely necessary, not sufficient, to change the course of the world at large. So we'll have to live within the world in which we live, warts and all.

This implies to me that what follows from your observations is not ownership of cause and effect but recognition of our collective limitations. I don't think that we make actual choices in the way you hope.

I said this - slightly edited here for brevity, since not everything is relevant - in a reply to Jason Becker under one of his posts, and wanted to briefly explain what I mean by it.

It is relatively easy to fall into attributing to intent what is simply an expression of a system. Since we can’t observe society as a whole, we may think that intentions are involved, in the sense that you make a conscious choice to behave in one way or another - in this case it was about the feeling of not being able to give a complex take because an audience nowadays demands a simple one - and that the members of the audience, as well as the person talking to said audience, have lots of freedom in how to behave here. Rational discourse can then decide the best course of action. Critiques like this suggest that things could be different if we just chose to behave differently (insert here some good reasons for doing so, or alternatively some bad reasons for sticking with the old behavior).

Now, the text I replied to here is called Takes spread like wildfire. What I love most about this blog post - and I commented on it before actually - is the title, because it doesn’t assume that anybody is doing anything on purpose. It just happens. This I totally agree with.

I’m generally very interested in the make-do. As in: “I was made to do this.” As I don’t assume intentions, I also don’t assume clear-cut flows of cause and effect. I believe in entanglement. Meaning: there are a million little things we are connected to - human as well as non-human actors - and all of these act as weights on our doing and being. In a sense we are these weights. Some of these weights are internal. I do think we have to assume consciousness. But internal or external weights: we are a result. And if we are results, then so are our actions. Except that our actions are not the result of us, because we are just a part of the chain (or better: net) through which something like a society expresses and reproduces itself.

As individuals with the capability to learn and observe, we may never untangle any of the entanglements that surround us in real time. But we may learn to readjust our internal weights, so to speak. And this, in turn, is observable by people who care. This will never be everyone. And even those who care may not be able to see. But there is a potential here, which lies in a convincing performance, not in rational argument.

Which is why I would say that performing a certain way of being is probably more likely - and let’s be clear here: it is still far from a clear a->b thing, and by no means highly probable - to inspire others to act with this new way of being in mind. What they do with it, individually and as a collective, is absolutely not in your hands, though.

So if it is about subtle takes: make them! Realize a person - make a person a reality - that makes interesting, subtle takes. They may spread. You may get critiqued. People may want simpler answers. If it is about politics, they may sort you further to the right and as less progressive than you’d think. If you can’t realize a person like that - because it might bring harm to people you care about - that is society protecting itself. Maybe only make those takes subtle that don’t endanger what would be vulnerable if a person like this existed?

#WeblogPoMo2024 - Thoughts on CoreInt 600

Listen to it here.

The whole episode was about the recent OpenAI/Scarlett Johansson thing that blew up in Manton’s face.

I happened to be at the computer when it was published and was curious, since my first take on why I was angry had been muddled by that anger, so I listened to it immediately. I found it very interesting and very human (in a good way). A little protocol of (mostly) Manton’s views:

  • His initial take was: The Sky voice was not intentionally ripping off Her.
  • The response felt like: some people were angry with him because he was defending a disgraced company.
  • So to restate it: A plausible take was given.
  • A tweet: “her”. When he gave that take, he didn’t even think that the tweet could be about the voice per se.
  • An interesting thought experiment: What if we were talking about a war criminal, and somebody claimed they used a ridiculous weapon that doesn’t exist to commit their war crimes? Shouldn’t we be able to say that they did not use a ridiculous weapon that doesn’t exist?
  • And then: A news article seems to vindicate the plausible take.
  • But then: Does the news article actually do that?
  • And: There still might be a legal case. It seems very possible.
  • A plea by the cohost: Give it some rest. It’s not Manton you’re blindly angry with.
  • AI is here to stay: We have to engage with the technology.

I think we’re all still guessing and may never know. Depending on what we take into account, some things seem more plausible than others.

I stand by my slightly more sophisticated way of putting it after I fired my first shot. I will give it another try here: Not taking into account what else is going on with that industry, and with that company in particular - their track record so far - and not taking into account how important a Sam Altman seems to think he is - right up there with Dorsey and Musk, as the co-hosts seem to agree - makes Manton’s plausible take based on circumstantial evidence rather less plausible (which also renders the thought experiment unsuitable). And I wouldn’t want to have the feeling that the person behind my blog hosting service is not seeing this quite plausible connection. Isn’t there that saying, “where there’s smoke, there’s fire”?

So saying Manton makes me look bad “by association” is surely putting it way too strongly. I will own that. But I will say that subsuming all the critical voices under the same umbrella seems inappropriate. I think - if we are fair to me and some others - that it seemed a little naive to not think about the smoke, to paint everybody else as just other, less rational, angry people (or at least to strongly imply it), and to not at least contemplate the possibility of a backlash.

Alright. That has been said. Uff.

I also want to say, candidly, that I do love the idea of the audio narration feature that seems to have come out of reflecting on this. This is good. AI is indeed not going anywhere. And we may never fully reconcile our feelings about this issue. But that doesn’t mean we have to forgo human connections or aren’t allowed to make the web feel better.

#WeblogPoMo2024 - Thoughts on "Takes spread like wildfire"

An interesting post by Jason related to my recent discussions of manifestos and the general trend towards simple, moral-purity-based statements (check this post and its links at the top if you’re curious). Some excerpts from Jason’s post:

Communities converge on an understanding of how they are supposed to feel about something very rapidly on the internet. It seems to take no time at all for influential voices to emphatically determine what views are Good and Right and what views are Wrong.[…]

This is not all bad.[…]But it does mean that there are many things that are not safe to share. […] [You can’t] “try out” an argument [anymore] or even an identity to see how it feels. […] It also means that sometimes when your peers and people you respect have all decided what the “right” view is, it’s very hard to comfortably express a less strident, more lukewarm, more timid, and possibly more complex or nuanced take, especially if you’re not ready, willing, and able to present a dissertation about your view point.

The way I’ve chosen to operate in this environment is to listen to the intensity of others. This almost always means one of two things:

  1. I will end up agreeing with them, but for various reasons, I need to listen more and more carefully to be convinced. My own mind and emotions take a lot more evidence to get to the same conclusion my peers made it to right away.
  2. Folks are jumping on a bandwagon and squashing nuances and loudly proclaiming the easy thing. Anything I add to the conversation will drain me of all kinds of energy, likely ending in the person I’m talking with claiming they held the same belief that I do the whole time. In both of these cases, I don’t need to speak. I can just listen. And eventually, I can decide that if we’re not heading toward the first case, I can stop listening. I can just opt out. It’s not a conversation, it’s a signaling competition.

I like this, even though I have my gripes with some of it. Not all of my notes are direct responses to Jason; some are general thoughts addressed to an imagined reader trying to understand the implications of a post like this.

  • Trying on arguments and personas can be a highly questionable practice. People may get hurt. So the web today doesn’t owe you this. Doing this is at least in some ways related to tricking people. But so is telling stories without disclosing you’re telling stories, of course. I think it’s possible to flag posts that do this as experimental thinking or whatever. See Maggie Appleton’s Epistemic Disclosure.
  • More complex, subtle takes are not a problem in and of themselves, I’d say. A richer description of a situation can be very interesting and enlightening. The problem is often not the complexity of a take, but the take itself. For example: I can write a complex text about my feelings around climate change in which I deny that there is trustworthy evidence that climate change is even a thing. The incendiary part of a take like this is its general thrust (for people who care about climate change). Also: subtle takes don’t magically free you from being misinterpreted or misrepresented either. Your text may end up being a completely reframed tool in somebody else’s texts. You can’t do anything about this. And you never could! But that is not a new development.
  • Not knowing where you stand yet is generally fine and can be super interesting (because vulnerability is interesting), as long as one precondition is met: you are not claiming not to know where you stand while actually just using that as a defense for a questionable position. Again, you can flag that stuff appropriately.
  • Just as nobody owes you being okay with you trying out an opinion on them that you don’t actually hold, nobody owes you freedom from criticism for what you put out in public. In the best case you’re part of a community that will protect you, enforces a certain code of conduct and hopefully has values you can agree with, but the greater web doesn’t work like that because it’s basically social wilderness. That means the further your reach, the more likely it is that you will encounter pushback.
  • Everything’s questionable. Using “The facts™” is often an attempt to state something as objectively as possible without realizing that its factualness is the result of negotiations. See Latour’s Modalities for a great concept handle for this.

#WeblogPoMo2024 - By Association

So yesterday, shortly before bed, I got angry about something somebody else wrote on the internet.

[@manton](https://micro.blog/manton) This is such a bad take. Why do this? You make me as a mb user look bad by association. I get that you're trying to say that it was wrong and trying to say that people are emotional about this. But what you're also saying - inadvertently perhaps - is that Altman (et. al.) didn't do it on purpose, because he's obsessed with the movie Her and therefore people should have not freaked out or, if anything, freaked out earlier since that voice was already out there for a while?

And what does it even mean to say that their account is not a total lie? Does it need to be a total lie?

Why protect a company and a CEO like this? I myself am using AI here and there and I think it is an interesting technology and probably here to stay (for better or worse), but there is no need for "both-sides-ing" it here.

This was a reply to a post by Manton - the person behind my blog hosting service, Micro.blog:

When your company becomes the enemy, all that matters to people is what feels true. OpenAI’s Sky voice shipped months ago, not last week. We hear what we want to hear. OpenAI mishandled this, no question, but most likely Her is ingrained in Sam’s head vs. intentionally ripping off Scarlett.

I had innocently been scrolling through my Mastodon feed and saw a couple of posts about it scroll by.

I am not above admitting that I got angry, in part, because other people got angry. Social media is a seductive medium. I was tired and it was easy to fire off a reply like the one I did. It felt righteous.

Now, I had written very recently about my interest in not playing this “moral purity”-based game anymore (not that I ever really played it, but that is beside the point), in which we proclaim a certain world view or stance as morally superior, avant-garde or whatever and start to judge what’s happening in the world. My point with this was and is that we ought to construct and view - or at least make traceable - the complex network that makes up the state of any (local) reality in our moment in time. I want less reductionist views (although I freely admit that heuristics, simplifications and abstractions are important actors in a text and are not to be ignored either) and more connective tissue between manifesto-like expressions and the details and steps on how to actually scale them for the planet, or even just manifest them in my local reality, here and now.

I am a skeptical person and am always a little suspicious if people proclaim stuff like “Just don’t use AI!”, “Just don’t fly!”, etc., because all this doesn’t make sense if, for example - I speak for myself - AI is helping me paper over gaps in my programming knowledge. I am dependent on the job I have, so I had better have some understanding of how to use this technology. When my boss says “your estimate is too high, ask ChatGPT how to do this and you’ll be quicker…” I of course have a thousand counter-arguments in my mind, but in the end I am not studying philosophy, management theory, or whatever. I am employed as a programmer and am expected to do my work in a way that is (seemingly) most efficient. And I have to admit that AI is actually helping me, too. I think. It is in any case an interesting technology that I should know how to use if I am to engage in meaningful discourse about it. So much so that I am using these tools in my free time here and there as well. I also happen to have family in Germany while living in Finland, so I will fly more often than others (at least twice a year, probably) to see my family.

So I think I am allowed to say that I get it. I get the need and the want to express something more complex than “Good is defined as X, you’re not X therefore you’re not good.”

However, taking on more complexity when doing a take still does not excuse you from also recognizing power structures. Many people are outraged over OpenAI’s handling of a voice that sounds very similar to Scarlett Johansson’s. I am not very hopeful that an investigation would ever conclusively show that OpenAI simply cloned her voice, or that they employed a voice actor precisely because that person sounded similar to Johansson. But it seems questionable to not assume some form of recklessness or even ill intent here.

People who have the bandwidth for it are worried about and annoyed by tech bros and Big Tech. And often for good reasons. It’s not like there is no evidence. They feel violated by how invasive and exploitative these companies are. This is where most manifestos in this space come from, I bet. So a case like this - which to most people who can stomach engaging with this part of the AI hype is just another one in a growing list - is to be taken seriously as a prime example of what people despise about the companies behind AI.

I am angry about posts like the one I replied to because they give more complex, maybe more subtle viewpoints a bad reputation. I do not believe that all more complex takes are created equal. But it can look that way sometimes, which is why complexity has become a watchword and moral purity so attractive. “Both-sides-ing” is a terrible excuse for a careful, nuanced take, though.