Dev Notes

A Limitation Of Laravel's Seeders - And Why It Is There

Laravel has pretty cool tools to work with databases and fill them with seeded data. This is useful in many contexts, but mostly if you want to test things.

Seeders are cool, because they can make use of model factories - or factories for short - to create some test data that conforms to what we consider valid data.

However, the combination of seeders and model factories has a limitation: seeders are not meant to know about each other's results. So if you have three tables - users, blogs, posts, for example - that are related to each other, and you would like to have individual seeders for those tables, then you currently can't use factories in those seeders without some extra work.

Note that factories do not have this limitation. You can use recycle on a model factory and reuse the resulting model in a subsequent factory call. This even works with indirect relationships, like a post that belongs to a blog which in turn belongs to a user (and therefore the post belonging to that user, too). A short YouTube video explains what I’m talking about.
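As a sketch of what recycle gives you (the User/Blog/Post models are just the example entities from above, and recycle() requires a reasonably recent Laravel version):

```php
use App\Models\User;
use App\Models\Blog;
use App\Models\Post;

// One user is created once and then reused ("recycled") by every
// factory that needs a User - directly or indirectly.
$user = User::factory()->create();

// The blog belongs to $user instead of a freshly created user ...
$blog = Blog::factory()->recycle($user)->create();

// ... and the posts reuse both the user and the blog.
$posts = Post::factory()
    ->count(3)
    ->recycle($user)
    ->recycle($blog)
    ->create();
```

Within a single seeder this works beautifully; the problem described below only appears once you try to spread these factory calls across multiple seeders.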

So that’s pretty cool and all, but if you want dedicated seeders for different parts of your database, you’re still out of luck. Why would you want different seeders? Because you may want to separate the generation of seed data by entity. Sure, you could create a user which in turn also creates a blog with a handful of posts in it, but maybe you’d rather create a user, a blog and some posts and bind them to each other afterwards.

Separating the seeding procedure in this way would also help you later, you might think: if I asked you where posts are seeded, the answer could simply be “in the PostSeeder, duh” - if we could use that convention.

We would need to make it possible to receive the results of a seeder, but there is a problem: a seeder is supposed to create more than one row - or at least it should be possible to work that way. In other words: if we want to create a handful of posts per blog, one or two blogs per user, and three test users (let’s say), how are we supposed to relate all of these disparate entities to each other, even if we could take the results from one seeder with us to the next? How do we disambiguate the instances of the created entities? Which one of our two blogs should be connected to the ten posts? Should it be five and five? Two and eight? How do we communicate that?

We would need to explicitly pass specific entities created by the factory in one seeder to the factory calls in the current one, and that becomes confusing rather quickly, because factories might also have states - like a user who is deactivated, and so forth. And as I said: if we create more than one, how do we pick an entity?

Since it’s not easily possible to tell the next seeder what the previous seeder has created, having lots of seeders - e.g. one for each table - doesn’t make sense. We have to take the route where we handle creating related but different entities in the same seeder.

Another practical problem comes from the fact that a database seeder doesn’t return its created models, only the seeder classes it called itself. In order to still go the route of keeping entities in their own seeders, we would need to enhance the basic database seeder classes Laravel provides. So from that perspective it seems that the framework itself doesn’t intend for you to separate the creation of different entities via factories into different seeders.

P.S.: It is kind of weird, then, that when you create a model you are also given the option to create a database seeder:

A screenshot of the artisan make:model cli command, showing that it allows to create a database seeder when creating a model.

This seems to suggest that maybe there is an intent to create seeders for every model (or entity), but as I tried to show, this is not practical: the seeder classes themselves do not really follow that logic and do not afford you the functionality to communicate between different seeders out of the box.

P.P.S.: What you can do is communicate through the database, I guess. Meaning that if you create entities in one seeder - using a factory (or not) - you can of course read those rows back from the database and relate different entities to each other that way. But you’ll lose the elegance of expressing these relationships through the ORM and the recycle method.
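A hypothetical PostSeeder along these lines might look something like this - assuming a BlogSeeder has already run, and that Post has the usual belongsTo relationship to Blog (all class names here are invented for illustration):

```php
use App\Models\Blog;
use App\Models\Post;
use Illuminate\Database\Seeder;

class PostSeeder extends Seeder
{
    public function run(): void
    {
        // Re-read what the BlogSeeder wrote, instead of receiving
        // the created models from it directly.
        Blog::all()->each(function (Blog $blog) {
            Post::factory()
                ->count(5)
                ->for($blog) // explicit relation instead of recycle()
                ->create();
        });
    }
}
```

It works, but the relationship knowledge now lives in a database query and a seeder-ordering convention rather than in the factory calls themselves.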

#100DaysToOffload Reflexions After Two Weeks Of Having Accepted My Ineffectiveness

(I started #100DaysToOffload on the first of June and have written - not including this one - 12 Posts. I guess that’s enough to make it? About a third of my days are spent pressing publish on a blog post. So far so good. Before I finished this paragraph, I was thinking that I was too slow …)

I had reasons to reflect more inwardly for a moment, which is why I didn’t write for a couple of weeks or so. Things at work didn’t go so well, mostly because I was told I was slow (and shown incontrovertible evidence proving it) and had to learn, un-learn and accept a new truth about myself: that I am slow, and that me being good at my work (as good as I am, which is good, but not evenly good and certainly not fast) was and is mostly a result of being slow and being willing to make up for it by working longer. Which is not a mode of working that is economical and that pays the bills.

In a big company this might not matter as much. You’re embedded within a team, within a project and within a department, which will shield you somewhat from being directly exposed to your impact on the company’s bottom line. But the smaller the company, the more directly noticeable your contribution to the value that company creates becomes. You’ll feel it much more quickly, and much more directed at yourself, when things are out of order and they happen to be caused by the way you work. In my case, being called ineffective repeatedly over my first year at the new job was both shocking and accompanied by disbelief and dread. But I only needed to look at my estimates, the time it actually took me to implement a feature and how many of those hours the company could actually charge the customer, and compare that with my coworkers. The feedback was real and objective. It still took me almost a year to accept it.

As you can imagine, realizing that you’re slow and ineffective had and still has all sorts of cascading side effects. I am reexamining what I believe to know about myself and what I think is true about the nature of work and life in general. Contemplative sense-making has become an important part of a good life for me rather than the only way of being. I notice others who are slower and see companions. Like my barber, for example. And I appreciate that she takes her time. Whereas before I was sure that any criticism of a slow life was just an uninformed, un-thought-through opinion, I now come to recognize - intellectually, if not yet experientially/emotionally - that you can maybe also want to be effective instead (or in addition). I still bristle at the insinuation that living an exploratory, tranquil life has anything to do with wasted effort, but I will have to somehow figure out a way to combine economic pressures with the qualities that make a life, no, my life, worth living: reflexion, focussed attention, taking my time, craftspersonship, exploration, deep understanding, empathy, diversity. I kind of still don’t know what this all means.

I’m willing to find out, though. And this much I do understand: not being effective is what holds me back in my career, maybe in life. So I will have to go through this crucible. I am not making money in a contemplative career - I am no writer, academic, critic or whatever. So I can’t just turn around and proclaim that a fiber of the reality I live in is somehow “wrong”, as if there were a choice for me to make. I may bring myself to call it a negative or bad aspect of a modern, digital, data-driven and market-oriented society, but that still means I’ll have to live with it and master it to the best of my abilities.

#100DaysToOffload Hub "Efficient Programming"

Changelog

  • 2024-06-23 - Created this note
  • 2024-07-04 - Added “Reflexions After Two Weeks Of Having Accepted My Ineffectiveness”

Note

See Post Hubs for an explanation of these kinds of posts.

The efficiency part is where my main problem lies: In order to deliver on time I will need to learn to cut corners and leave messy code as it is and even add my own mess on top of the rest at times.

This doesn’t seem to enable delivering greatness. I recently read Slow Productivity, which has a completely different philosophy of work - “do fewer things”, “work at a natural pace”, “obsess over quality” are its main points - and this job stands at odds with that philosophy (which I wholeheartedly agree with).

I now think that this is not the correct framing. It’s reasonable to assume that I will work for the rest of my career in situations that demand - more or less - efficiency. “Delivering greatness”, in part, is about making it happen under economic constraints. Entrepreneur or salaried worker: this means that I need to be a good investment to be allowed to work on whatever I deem “high quality greatness”™. — Quality And Efficiency

This post was a first attempt at framing the problem of being an ineffective programmer. Before this one, I wrote a very emotional one that was mostly fueled by fear of getting fired for taking too long at work:

It sucks to admit, but I am pretty slow when measured just on delivered features, compared to many others. The main reason is that I am unable, maybe also unwilling, to “just deliver”. — Just Deliver

I am now trying to figure out what makes me inefficient and how I could become more efficient.

After two weeks of sitting with the problem, I think my framing of overcoming my own ineffectiveness is correct: I will not be able to escape it, and it’s good to challenge myself.

#100DaysToOffload Categorizing Code Changes

In an effort to become more efficient, I wondered how I have tackled issues at work so far:

Here’s a little categorization of how I approach different kinds of code changes.

(Disclosure: I don’t do this literally, but intuitively …)

First, I ask myself if the change is trivial or not. Is it just something really small and contained, like changing a constant or what is returned from a method? Or a small visual change? This is so easy that I generally only do that change and nothing else. Maybe I add a type hint or update some kind of documentation if it exists. But that’s it. Hard to get more efficient here.

If the change is more complex, I immediately subdivide this category into two subcategories: is it something entirely new, or is it a substantial change to existing code?

What makes these two cases different is the kind of analysis (and implementation; see below) I’m doing. If it’s new stuff, I try to imagine what a good, modern, sustainably maintainable version of the app or website I’m working on might be in general. What would be a good, extendable starting point? Very often the tech stack is set, but the question becomes how to use what is given in the intended/best™ way.

For example, I was recently asked to implement a CRUD app as part of the admin interface of a website. After some probing into what the appropriate tech stack might look like - since I hadn’t yet done a greenfield implementation in this project - I knew which tech stack template to fill out. I decided what layers I needed, what build system to use for the frontend, how to bundle the backend code, how to test the code and how to run those tests in a pipeline on push. Some of these things were realized easily, others took longer; for some of them I had ideas about what modern best practices looked like, for others I had to research them. Because the system this CRUD app was a part of was a monolithic “portlet” system that subdivides the contents of a page into reusable widgets, we had to take this into account, even though we didn’t need that much complexity. In the end, I built a relatively modern, portlet-compatible CRUD app. While implementing the core features I learned more about the entities we were dealing with and how they connected to the rest of the website, which meant revising the main model a bunch of times; it also turned out that the JavaScript module had to be relatively complex, because we wanted some interactivity that I knew about but hadn’t thought through hard enough beforehand. So yeah: lots of extra work upfront to make working with this CRUD app nicer later, and a good amount of revisions during the main implementation phase of the MVP to fulfill the architectural and quality criteria I had set for myself.

If it’s a complex code change that cuts across lots of different, already existing modules or classes, the question is more how we can make these classes friendlier to change in the future in general, and how my changes could be implemented in such a way that they themselves can be changed easily. So not only the integration code but also the feature code should be extendable, readable and hopefully easier to reason about after I have touched it.

Another task I had was to repair and update a multi-tenant system’s copy-tenant functionality. This was a complex task that cut through a big chunk of code. It was extra complex because the legacy system had some quirks you needed to know about and was very hard to test, because some of the classes were autogenerated and copying a tenant took minutes to complete. After a long and sometimes arduous process of trying to understand how the system worked and how its data was organized, I reimplemented the copy-tenant functionality - which had previously been based on assumptions about copying its mostly hierarchical data that didn’t hold true anymore - so that it would no longer presume things that led to broken tenants. I fixed a handful of bugs and unexpected behaviors along the way, but also made sure that the “copy tenant” button basically did what it did before, only now finally correctly. I’m not going to lie: I had tons of help accomplishing this herculean task, and it’s not humility that makes me credit most of the actual implementation to one of my lead programmers. Big chunks of time were spent understanding the legacy code, figuring out where it went wrong and figuring out where and how to put the corrected code in a mostly non-invasive way.

So both kinds of complex change consisted of chunks of implementation and blocks of analysis. Both types had some work that was extra in some way. So let’s look at the details of that.

The implementation of new software consists of creating lots of new files, folders and “hooks” that connect the new to the old. We can always immediately work within best practices: Clear separation of concerns, no spaghetti code, native type hints throughout and tests from the start to name just a few.

Implementation blocks in already existing code of varying degrees of decay are different. Here we’re talking about refactoring and making a sensible “hole” for us to fill with new features. We may need to change what a method returns, introduce a new service, or rework an existing class - by adding properties or methods, or by making it conform to a new interface so it becomes more easily replaceable - or change where business logic is located, how it is accessed, or what it decides in what way. There are many ways to work with existing code. Some of them are about making the aforementioned hole for our new logic, and some of them are about making the legacy code easier to comprehend. I have yet to work in a project that actually took the time to do systematic refactoring that goes beyond just making a thing work. It’s all rather piecemeal.
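As a tiny, hypothetical sketch of one of these moves - making an existing class conform to a new interface so it becomes replaceable - imagine a legacy mailer (all names here are invented for illustration):

```php
<?php

// The new seam: callers depend on this interface, not on the legacy class.
interface MailerInterface
{
    public function send(string $to, string $body): string;
}

// The existing class only needs to declare that it implements the interface.
class LegacyMailer implements MailerInterface
{
    public function send(string $to, string $body): string
    {
        return "legacy mail to {$to}: {$body}";
    }
}

// New feature code can now slot in without touching the caller again.
class QueueingMailer implements MailerInterface
{
    public function send(string $to, string $body): string
    {
        return "queued mail to {$to}: {$body}";
    }
}

// The caller is the "hole": it accepts anything fulfilling the interface.
function notifyUser(MailerInterface $mailer, string $to): string
{
    return $mailer->send($to, 'Welcome!');
}

echo notifyUser(new LegacyMailer(), 'a@example.com'), "\n";
echo notifyUser(new QueueingMailer(), 'a@example.com'), "\n";
```

The point is not the mailer, but the seam: once the caller depends on the interface, the legacy implementation can be swapped out piecemeal.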

The analysis parts are also pretty different. For the new stuff, analysis consists mostly of understanding modern best practices in the context of the concrete implementation I’m supposed to do. How does Vite want me to work? What does a modern Symfony bundle look like? Should models have fluent setters/getters? Should models be immutable? Many of these kinds of questions are about how to bring ideal conditions into my concrete implementation. On top of best-practice questions there are questions about how things work. Like Vite, for example. It took me some time to figure out how Vite’s library mode actually worked. How Vitest worked. How and why modern Symfony bundle practices wouldn’t work within the context of our legacy system, etc. That last example also shows how we have to figure out how the best practices and technology options out there in the world actually map onto our concrete case. Because sometimes they don’t, and then we have to figure out what that means for the project.

Analysis of old code often just means grokking what it does and how it can be changed. If you know nothing - like I did when I started working on the copy-tenant functionality - you may need to figure out the bigger picture as well as the details of the thing you’re actually supposed to change. This has many dimensions to it: from guessing the intentions of the programmers who came before me, to parsing algorithms and business logic, to following call stacks, to simply figuring out where to look, to understanding the data that flows through a given system, there is always a lot going on.

Complex changes especially, but also trivial ones, always involve a certain amount of extra work. This extra work might be part of the analysis blocks or part of the implementation blocks.

This extra work could be adding a code quality tool. Or it could be cleaning up a method while making sense of it. Or it could be taking the time to really understand the details of how validation is done. Some of this extra work makes the essential work easier. Most of it makes the essential work more fun, I find.

Analyzed like this, it becomes clearer to me that part of my problem of being inefficient is that I enjoy the extra work, and that when embarking on a new programming adventure I am kind of stubbornly hoping that the extra work could be my main work. In other words: I am behaving irrationally!

Going forward I’ll have to limit the amount of extra work I do (e.g. less fucking around). That has to be part of the solution. Another point is that figuring out what to do, rather than fucking around and exploring, has traditionally been hard for me. I think (hope) that writing things down in a composable way using my trusty notes system will help me figure things out faster, since I will be able to rediscover what I already figured out before.

#100DaysToOffload Quality And Efficiency

What does it mean to do well at work? As far as I can tell, it means that you do the work reliably, do it quickly (or rather: efficiently) and do it in an appropriately qualitative way. For programmers that means delivering features, preferably within the given estimate (allowing for some margin of error and the occasional outlier).

So: reliability, efficiency and quality are what a programmer can be judged by. If I had to judge myself, I’d say I have the quality part nailed down. I am not afraid to tackle structural problems in a legacy system, do refactorings and take the time to remove technical debt. However, it’s the other two parts that I’m lacking.

The reliability part - meaning that I deliver consistently - is not as problematic as the efficiency part. Which translates to: I am able to deliver the same high quality in the same inefficient manner.

The efficiency part is where my main problem lies: In order to deliver on time I will need to learn to cut corners and leave messy code as it is and even add my own mess on top of the rest at times.

This doesn’t seem to enable delivering greatness. I recently read Slow Productivity, which has a completely different philosophy of work - “do fewer things”, “work at a natural pace”, “obsess over quality” are its main points - and this job stands at odds with that philosophy (which I wholeheartedly agree with).

I now think that this is not the correct framing. It’s reasonable to assume that I will work for the rest of my career in situations that demand - more or less - efficiency. “Delivering greatness”, in part, is about making it happen under economic constraints. Entrepreneur or salaried worker: this means that I need to be a good investment to be allowed to work on whatever I deem “high quality greatness”™.

I’ll have to learn to actually make hard decisions, because I will need to make more trade-offs. I will need to get an intuition for when I can cut corners, what corners can be cut, the different ways in which corners can be cut, how to argue about corners and trade-offs and so much more.

If you look at it from a “best practices, always” standpoint, these things don’t seem necessary to consider, but I’m now convinced that it’s actually efficiency plus quality, more than quality on its own, that could bring me to the next level in my career.

Stressful week at work. I wrote an emotional post last Friday about some of my worries. Finally some relief today. I, too, still have to learn and will make mistakes. And learning is not a linear process. But knowing that my employer has my back made me realize how many of my worries were not real.

#100DaysToOffload Just Deliver

I had a hard day at work. It was hard because it ended a streak of about three days in which I was unable to work productively, and today was the day on which I finally had to “show my work”. And then nobody saw my work. But now it’s lying there, out in the open, awaiting feedback. And I have to go into the weekend with a feeling of dread.

I’m a programmer by trade and a slow, but hard-working, explorative kind of person in general. I love to find out how things work, how they could maybe work better and how to make a system such that it communicates well with the people maintaining it. Which means that I’d rather spend weeks reworking a legacy system, improving its workings and reorganizing and tidying up its architecture than implementing new features. So I like to do the work that is most often appreciated by other programmers - on a good day (most often people will not really notice … until they do). Keeping a system in good working order. Like a well-oiled machine. Notice that end users or features are not necessarily part of that. It’s not like features are uninteresting or unimportant to me, but the how and the why matter to me a whole lot more. Especially the how.

However: because I get paid for features or bug fixes on features, and not for refactoring, removing technical debt or thinking about software architecture, I’m afraid I am sometimes not the best programmer for the kinds of tasks my company offers. It sucks to admit, but I am pretty slow when measured just on delivered features, compared to many others. The main reason is that I am unable, maybe also unwilling, to “just deliver”.

I assume that being able to “just deliver” - meaning without refactorings (e.g. to make especially gnarly code comprehensible for me and future programmers) or enhancements of the project’s setup (e.g. to configure an autoload feature as I needed to do today) - is the result of either not seeing the problems (e.g. maybe not being as experienced, yet) or being experienced enough to not care about the problems anymore (e.g. maybe by having had to navigate some portion of legacy code so often, that they just know where to look without the need to make the code more comprehensible).

But I see it everywhere at my work: The people who are good enough to be hired permanently (which I was, too, somehow …) all seem to possess something I don’t. I hesitate to call it more experience or talent. Which is not to say that I am perfect. I do readily admit that I totally have blind spots. And even where I feel confident, I know that I could still learn a lot more. And I also admit that my values are sometimes at odds with what is the bread and butter of my job: just deliver features. On time. Without too much back and forth, in acceptable quality (but nothing more, since nobody is able or willing to pay for it).

So what is it then that makes my colleagues better at this? I think it is a willingness to not care as much. That’s the “just” part of the “just deliver” motto. I don’t mean that derogatorily, even though it sounds like it at first. If you are able to not care about technical purity, you may be able to deliver faster. You may be able to overlook a less-than-ideal implementation to have a working version earlier - which can be a good thing.

Apart from being less concerned about technical quality, I think there is a certain soberness in most of my colleagues. No matter how difficult some code may be to read or how tight a ticket’s allowed time budget may be, they seem to be able to not reinvent how to do things all the time - like I seem to. They can keep their powder dry. They do not need to go over the top, whereas I seem to go over the top at the drop of a hat. Maybe a reason for this is a difference in personality type, sure. But I think that it might also be a difference in approach, which is much more interesting, because I might learn to incorporate that approach.

I think the soberness comes from having somehow learned to respect lines in the sand. Like today (Friday), when I had to hand in part of a feature plus a whole lot of refactoring and general enhancements of a (to me) new project that I had been introduced to on Wednesday. Another programmer would not have done that. They would have been able to not even seriously contemplate the possibility of doing what I did. They would have done a more minimal job, probably cursed a lot, probably complained a lot, but they wouldn’t have spent the better part of three working days delivering two thirds of a feature that was supposed to take around two hours. All the while having to come up with justifications for doing so.

So yeah. Other people don’t do that. As much. I’m sure this is a spectrum rather than just extreme me vs. moderate everyone else. Part of why I’m so dismayed by this is that I might be considered too old to not have learned this already (I’m 37, with more than 5 years of work experience). In one word: I might be too expensive if I don’t learn to bend a lot more. And quickly. That’s enough to make me feel bad. Since it’s the weekend, without any feedback coming my way until Monday, the bad feeling is mixed with anxiety.

I can’t really tell how bad it is this time. With the recent worsening of the IT job market and the generally worsening state of the economy, I’m extremely unsure of what to expect if this problem continues or worsens over the next 6 months, let’s say. Having wasted almost three days - after having been repeatedly told, ever since I joined the company, to pick up the pace (not in so many words, but basically that), to be more pragmatic and to make smarter choices about which way to take to deliver solutions - I feel pretty gloomy, to be honest. I feel defeated by my personality in some ways. I don’t know how long this can go well.

I hope that I will catch myself in time, next time. And for the things that have already been done: I hope I find a way to navigate the somewhat difficult discussions that are probably ahead. That includes discussions about basically ignoring what I was supposed to do for the last few days and booking my hours in sometimes questionable (if technically justifiable) ways.

One silver lining: There is a chance that the extra work is welcomed after all. I might still be gently scolded, but in some ways taking the time to come up with a better solution and thinking long-term is almost always a positive, of course. So here’s to hoping that the people paying me think so, too. Fingers crossed.

Quick little story: I was working on a relatively big task to duplicate a bunch of data in a system I barely know. The biggest problem was that the data in question was hierarchical SQL data. That makes it tricky, because relational tables (rows and columns) are not well suited to traversing such trees. You could relatively easily use a migration-logger functionality and click through the UI of the backend to record the SQL you need to migrate this data. But this gets messy quickly and can easily go wrong. The more data you need to duplicate, the more important it is that you do the right thing across an increasingly long sequence of manual actions. There is also cleanup necessary afterwards, since the migration logger does not optimize its lightly abstracted SQL statements well. On top of that, I could not find a good interface to program my migration against either. It seemed that my options were:

  1. use the UI and record the changes using the migration logger
  2. write what you need in raw SQL yourself

I decided to go with option 2. I found a great solution that did not require a nested-set approach: a recursive common table expression. I felt like a genius. I was also sitting on my high horse: “How did no one ever come up with a good API for doing common things like copying this entity over onto this branch? Well, I guess I have to do it myself …”
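For the curious, here is the shape of a recursive CTE. This is a toy sketch using an in-memory SQLite database - the table name, columns and data are invented for illustration and have nothing to do with the real system I was working on:

```php
<?php

// Toy in-memory database with a small category tree.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->exec('CREATE TABLE categories (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)');
$db->exec("INSERT INTO categories VALUES
    (1, NULL, 'root'),
    (2, 1,    'child'),
    (3, 2,    'grandchild')");

// Recursive CTE: collect the whole subtree under id = 1 without
// knowing its depth in advance - no nested sets required.
$rows = $db->query("
    WITH RECURSIVE subtree AS (
        SELECT id, parent_id, name FROM categories WHERE id = 1
        UNION ALL
        SELECT c.id, c.parent_id, c.name
        FROM categories c
        JOIN subtree s ON c.parent_id = s.id
    )
    SELECT name FROM subtree ORDER BY id
")->fetchAll(PDO::FETCH_COLUMN);

// $rows is now ['root', 'child', 'grandchild'].
print_r($rows);
```

The anchor part of the CTE selects the starting node, the recursive part repeatedly joins in the children of whatever has been collected so far.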

Even though I made progress on the migration helper functionality, it was slow, and I had lots of questions regarding the structure of the data I was supposed to duplicate.

Yesterday my boss took two hours of his time to look at the problem with me. He pointed out that what I had been doing for the last three days was unnecessary. Of course, what could be done in the backend’s UI could also be accomplished programmatically. It wasn’t even that hard. The system had all the scaffolding for customizing the process that I could ever have wanted. This will allow me to write a readable migration in a few hundred lines, with no cleanup necessary afterwards. We solved a big chunk of my problem in no time.

I was embarrassed. Not only did I underestimate my colleagues and the maintainers before me, I also underestimated the system and its capabilities. I felt especially bad, because I thought I had done the right thing, like asking many questions and communicating with the project management why there was such a delay. I thought I was dealing with a hard problem - and in a vacuum it actually was a hard problem - but instead I just didn’t know where to look or what to ask.

It’s a new day and I think I take a bunch of lessons from this:

  1. Ask more directly: Do not ask “Is it guaranteed that every group starts with a node that has itself no parent?”, but ask “Is there an API to copy categories just like the UI has?”. See also: The XY Problem.
  2. Be okay with making such mistakes. I am new in the company. It is almost inevitable that I will run in circles and it may take some time before my knowledge of the system matches my level of general problem solving skills. Because I could have solved the hard problem - which is why I tackled it - I just didn’t need to.
  3. Accept your expertise level. As I said, I am new here, and at the same time I have programmed for a while, so I generally know what I’m doing. This is a great recipe for falling prey to the Dunning–Kruger effect: overestimating one’s ability in a particular context. I may be an okay problem solver, but I do not know the new system.

There is a certain inevitability to feeling a bit of shame while learning from mistakes. It’s part of the journey in a field as complex as programming. I’m still learning to embrace these moments, understanding that they are not just hurdles but stepping stones towards greater expertise and confidence.

As I move forward, I remind myself to maintain a balance: to stay humble yet curious, to recognize the boundaries of my current knowledge while also questioning established norms. This experience has taught me that becoming a better developer isn’t just about accumulating knowledge, but also about the wisdom to question, to explore, and, most importantly, to learn from slightly embarrassing fumbles.

So, with these lessons in my toolkit, I am moving on - a little humbler and wiser, a bit more prepared, and as eager as ever to challenge and be challenged. After all, it’s through questioning and exploring that we often find the most innovative solutions and grow beyond our imagined limits.

PHPStorm's keybinding system is ridiculous

It is no news that I am pretty skeptical that PHPStorm is actually a good IDE. I especially question their handling of international keyboards. You see, on an international keyboard like the German one, the keyboard shortcuts cmd + shift + 7 and cmd + / are virtually the same, because there is no way to type a / without typing shift + 7.1

To my knowledge, the only modern IDE whose default macOS keymap doesn’t work out of the box with my standard international keyboard is JetBrains’ offering. The reason is that some genius implemented key bindings without any understanding of which symbols can only be typed using a modifier key. So you end up with a keyboard shortcut like cmd + / that doesn’t work in PHPStorm, but works out of the box in VS Code.2

P.S.: The problem is old, btw: The official issue in their bug tracker is 7(!) years old.

P.P.S.: Is this a hard problem? Maybe. The more interesting question to my mind is, though: Why is it a solved problem in all other IDEs? This is table stakes.


  1. By chance you can also type cmd + division symbol, but that is not the same as a forward slash and is indeed - and this time correctly - its own shortcut in PHPStorm. Try to type that one on a keyboard without a numpad, though - which means this shortcut is useless on all(!) MacBooks unless you have an external keyboard attached. ↩︎

  2. To be fair: The shortcut ends up being displayed as shift + cmd + 7, but at least VS Code won’t act like cmd + / and cmd + shift + 7 are completely different key bindings (which they are not). ↩︎

The (Hu)go template syntax is bad.

There are no two ways about it. It’s just bad and often counterintuitive. It’s hard to read, hard to write, hard to debug, hard to fill in blanks. Easy things should be easy and hard things should be possible. With Hugo templates, everything feels hard - not impossible per se, but often harder than it should be.

I mean just look at this one line:

{{ $paginator := .Paginate (where .Site.Pages.ByDate.Reverse "Type" "post") (index .Site.Params "archive-paginate" | default 25) }}

This does/means the following:

  • $paginator - that’s a custom variable, it needs to be prefixed with a $
  • := - you need to use this operator to declare a variable and assign a value. But later it’s fine to just use = to reassign it.
  • . - “the dot”. It holds the context, which is basically a kind of scope. Why is the scope not implied? I don’t know. I guess if you had a this keyword, you’d end up writing this.whatever everywhere - and you would still need to differentiate globals from in-scope variables, so maybe this is better?
  • .Paginate - Inside “the dot” there exists a Paginate function, which has to be upper case because that is how a function is made visible outside of its “package” in Go. If you look at the list of available functions in Hugo, you will know that a function was exported, but not why. A lot of functions were not exported, and you don’t know why either. I assume the lower-case functions are all part of Hugo’s templating standard library, but the docs never explicitly explain what is going on.
  • - A space (yup). Functions and parameters are separated by spaces, so this space marks the start of the Paginate function call (and not the parenthesis in front of the where, as one might think when coming from other languages).
  • ( - The start of the parentheses denotes a nested expression.
  • where - that’s the where function. It takes an array, compares the value at a given key to a match value using an operator (equals is implied and can be omitted, as has been done here), and only keeps the elements that pass the test.
  • .Site.Pages.ByDate.Reverse - The .Site.Pages part is an array of all pages of the blog. The ByDate.Reverse part is used (and available only) on list pages - another detail you’ll have to know - that is, pages that have other pages under them in the file hierarchy. You also have to know that the homepage is a special kind of list page. This snippet is from the home page, so you can use it here to change the order of the retrieved array. Why you can’t retrieve pages in a similar manner on non-list pages is unclear to me.
  • "Type" - This is the key parameter of the where function. The key is “Type” here. Why is Type capitalized? Maybe it has to because of exports? Maybe it’s some other reason that I don’t know. In any case it is part of the keys of the element that is kept (or discarded). For the pages array an element includes things like a Permalink, a published Date and a also a Type. How do you know what an element includes? Well, it’s hard to say, because the proposed solution to use something like &#123;&#123; printf "%#v" . }} within a range function call that references the pages from the Paginator above only prints garbage. An actual solution is to use <pre>&#123;&#123; . | jsonify (dict "indent" " ") }}</pre> which gives a pretty printed JSON representation of all the available properties within a given context. But you’ll not find this solution in the docs, you have to get lucky and find it in the forum or Stack Overflow.
  • "post" - this is the match parameter of the where function. In other words the value found for the key “Type” must match this value in order to be included in the filtered array.
  • ) ( - we finish one nested expression - parameter one of our .Paginate call (the array to paginate) - and start another nested expression - parameter two (the page size) of .Paginate.
  • index .Site.Params "archive-paginate" - index returns the value at a given index or key of a collection. So in this case: the value of .Site.Params.archive-paginate.
  • | default 25 - if index does not return a value, the default function takes over and returns a default value. I don’t know why this has to be a function. I also find the whole part index .Site.Params "archive-paginate" | default 25 difficult to parse: Is the pipe still part of the index function call? You’ll have to know what pipes are.

Finally, we have parsed the whole thing:

instantiate customVar = 
functionCallThatReturnsAPaginator(
    functionCallThatReturnsAFilteredArray,
    (
        functionCallThatReturnsANumber ||
        functionCallThatReturnsADefaultValue
    )
)

And .Paginator itself is just an object that makes it easier to refer to and implement a paged navigation for an array of given elements (posts).

Apart from the unusual syntax, I find (Hu)go templates hard to parse even when I grok them somewhat. (Hu)go’s way of writing function expressions - namely using spaces instead of parentheses and commas - makes the whole line harder to read than it needs to be. Compare:

{{ $paginator := .Paginate (where .Site.Pages.ByDate.Reverse "Type" "post") (index .Site.Params "archive-paginate" | default 25) }}

vs.

{{$paginator := .Paginate(where(.Site.Pages.ByDate.Reverse,"Type","post"),(index(.Site.Params,"archive-paginate") | default(25)))}}

Granted, this is still hard to read, because a lot is happening in this one line of code, but still: I’d argue it’s much easier to parse, because opening and closing parentheses and commas carry much more information than a simple space could. Spaces are also commonly used to align or balance things, as has happened around the pipe char here. Does that carry semantic meaning? Nope, not in this case!

A visualization of what the different symbols of the templating syntax mean in our example. It turns out that the space char carries three different meanings: an aesthetic space, the start of a parameter list, and a delimiter between parameters.

The space is doing an enormous amount of overtime here, and I have yet to see a good justification for muddying the waters like this. The only reason I could see is that you’d otherwise have to balance parentheses, meaning you’d end up with the line ending in ))). The best part is that you still need parens in any case - you just have to put them around whole nested expressions! The real template version saves you two parentheses for the price of a parsing headache. I feel like that’s not worth it.

Let’s move on: It should be super easy to limit the list of pages to only include a certain category of posts, right? Would this have been what you’d come up with on the first try?

{{ $allPosts := where .Site.Pages.ByDate.Reverse "Type" "post" }}
{{ $allDailyDogos := where .Site.Pages "Params.categories" "intersect" (slice "DailyDogo") }}
{{ $onlyDogos := intersect $allPosts $allDailyDogos }}
{{ $paginator := .Paginate ($onlyDogos) (index .Site.Params "archive-paginate" | default 25) }}

So far so good. How about the inverse? You’ll find that there is no way to tell the $allDailyDogos line to simply do the inverse. There is no "not intersect" or whatever. You have to use another function called symdiff:

{{ $allPosts := where .Site.Pages.ByDate.Reverse "Type" "post" }}
{{ $allDailyDogos := where .Site.Pages "Params.categories" "intersect" (slice "DailyDogo") }}
{{ $noDogos := symdiff $allDailyDogos $allPosts }}
{{ $paginator := .Paginate ($noDogos) (index .Site.Params "archive-paginate" | default 25) }}

Symdiff is short for symmetric difference, and here it means that we get all elements from the $allPosts array that are not part of the $allDailyDogos array. In other words, we keep only the non-dogo posts - the inverse of the $onlyDogos filter from before.

We could have mashed this all into one line to make it totally unreadable, but I think this is instructive, and it would not have been any more readable in other template languages (if they could even handle it as a one-liner). Still:

  • Why do I need to make a slice/array out of “DailyDogo”?
  • Why is there no NOT operator? It would’ve been nice in two instances: 1. inside the where function, to save an extra call to symdiff, or 2. as a better (I’d say) alternative to using symdiff, because people think more along the lines of “all but not these kinds of things” than “the symmetric difference of these two sets”.
  • Why is it so hard to chain conditions to the where function to the point where you’d rather create two arrays instead of filtering the array down in one step?

Ugh. There is so much in just this one line - and the subsequent slight changes I have made - that I find weird and unergonomic. I hope I could also show that this isn’t a complete and utter failure to understand what’s going on, either. Sometimes things are just badly designed for the most obvious use cases in order to accommodate fancier goals. I’d rather have a templating syntax/language that makes things easier. If that makes it more boring: Good. I’m here to improve my blog first and foremost.

Good to see that state management in SPAs is still hard since the last time (3.5 years ago) I used one of these frameworks. This time it’s Vue 3; last time it was React.

When I try to figure out how something works in my programming language, I often use the service Replit. It offers a simple, bare-bones PHP environment that is ready to go for testing things out, is portable, and is free to use.

One thing that is slightly annoying is that they only support PHP 7.4 out of the box, but it is very easy to upgrade to PHP 8. Let’s start with an example:

<?php
$str = "Hello, world!\n";
if (str_contains($str, 'llo')){
  echo 'YUP';
}

This code will not run as is on replit, because the function str_contains doesn’t exist before PHP 8.

[Screenshot: Screen Shot 2023-03-09 at 14:39:02]

So let’s change that. Click the three dots in the side bar and reveal hidden files:

[Screenshot: Screen Shot 2023-03-09 at 14:41:39]

Next, open the replit.nix file and change the used php version, like so:

{ pkgs }: {
  deps = [
    pkgs.php # was: pkgs.php74
  ];
}

Without needing to do anything else, we have instructed nix - the package and config manager underlying much of replit.com’s functionality - to use the latest PHP package, which happens to be PHP 8.

If we run our little test program now, it’ll work:

[Screenshot: Screen Shot 2023-03-09 at 14:47:10]

NB: The version of nix on Replit is not up to date, so trying to use php82 to get the latest and greatest PHP 8.2 won’t work. But PHP 8.0 is still better than PHP 7.4.

It is still a challenge to find the right words in code reviews, no matter which side I’m on:

As the reviewer I want to give really good reasons, be persuasive but also signal that I know that we live in a contingent universe. If I have knowledge to give I want to explain things well, without coming off as paternalistic.

As the reviewed I want to be open-minded and interested, but also able to challenge things back without coming off as defensive.

IDE Troubles: PHPStorm and VS Code

I work as a programmer for my day job. Right now I am working on two PHP projects. Coming from Sublime Text, but having had the need for more IDE features, I came to love Visual Studio Code and made it my home. VS Code is a great editor for PHP development, especially if you use the Intelephense extension. However, the bigger the project, the more apparent it becomes that the performance of the language server is not that great.

I have lived with this for many months now. Yesterday I started working on a new issue and thought to myself: “All of my dev team members use JetBrains PHPStorm; I should give it a try.” I had used JetBrains IntelliJ IDEA for a brief moment a while back and only remembered being fairly unimpressed, but not exactly why I felt that way. Having now tried getting into their PHP IDE product, I have to say my impression has not really changed - and now I also remember why.

Although the code intelligence performance is clearly better, it is the small stuff that gets me. The impetus for this post, for example, was that you can’t use standard shell shortcuts in the integrated terminal: Control + R (backwards history search) is one of my most often used commands.

I appreciated that they offer a keymap for people coming from Code to PHPStorm, and as far as I can tell, the commands in it work. But customizing an IDE is more than just having the same default shortcuts for common tasks available in the new app. I have a bunch of my own shortcuts in Code that use the pattern CMD + H <Key> (press CMD + H and then <Key>). I use CMD + H because my last name starts with H, so it was (and is) easier to remember. It just so happens that CMD + H is also a system-wide shortcut for hiding an application. So when I tried to set up a frequently used shortcut to reveal the currently opened file in the sidebar navigation, CMD + H S, the PHPStorm app was just hidden. In other words, it did not “shadow” or override system shortcuts when asked. This feels even more arbitrary if we take those two things together: On the one hand, standard shortcuts of the shell running inside the integrated terminal are shadowed by default - and this cannot be disabled - and on the other hand, shadowing a system-wide shortcut like CMD + H is not possible without remapping the shortcut within the OS. I feel like the IDE is not really playing along, but instead makes me jump through hoops.

These are only two of the thousand compounding cuts that make PHPStorm hard to love, in my opinion. I have an especially hard time understanding the design decision to remove long-standing shell functionality by shadowing standard shortcuts. It’s just a bad idea. It does explain why my colleagues never use the integrated terminal, but instead have an extra shell window open to the side to run terminal commands.

All in all I am very unimpressed by PHPStorm, but also worried: If the performance of Intelephense stays this bad much longer, I might have to look at other options. I know that NetBeans and Eclipse also have PHP features; I just don’t know anyone using, let alone loving, these apps for PHP development…

Forward.

Taking a break from microcasting

I recently published a post about my microcasts, and I’m now writing this to tell you I’m going to microcast less - less frequently and less scheduled. This is even weirder because I just created an intricate shortcut that automatically publishes a post with the right episode title and number without having to look up any of those details, which streamlined the entire process a lot.

But I have noticed that I’m not much in the mood at the moment to record my thoughts in this way. The early-riser project, for example, was a great McGuffin to get me out of bed, but since I have changed my approach to waking up, it has become less important to me. I don’t want to record these little episodes for the ramblings alone.

Being a big fan of podcasts like Back To Work and Cortex, I thought I could do a microcast about my work and how I approach it, but I have noticed that I would like to write about these things, not only talk about them extemporaneously. LeadDev is also the microcast with the worst recorded-to-published episode ratio: I take this as an indicator that I want to express myself more carefully than I can while walking the dog and having to watch the surroundings.

À propos the dog: I created my first microcast, the PuppyCast, because I wanted to have a record of the challenges and joys of raising a puppy. I wanted to look (via the DailyDogo, which I will not abandon, btw.) and listen back to this important time in my life. But I think this project has run its course: I don’t feel the need to publish it every week anymore. It’s not like nothing changes or that there is nothing to report dog-wise, but it all comes down to the need to simplify and de-schedule my life. And I have episodes for the first six months, which seems like a good place to stop.

Seasons change. And this is how I feel about my microcasting: Right now I’d much rather write than talk. Looking at my blog, I mostly see noise produced hastily to satisfy a self-imposed schedule. I will let this stuff go, for now.