Having one big monorepo is an obvious choice for some organizations. But unless you are a FAANG company, you might find it hard to support it in a way that scales. It is well known that those companies invest heavily in the developer experience and engineering productivity, which naturally comes at a considerable cost.
One interesting thing I have observed over time is that open-source solutions tend to struggle with bigger codebases. That’s when custom tooling comes into play — or does it? The truth lies somewhere in the middle.
To give you a clearer picture, cloc says we have almost 400k lines of TypeScript in our frontend monorepo.
That mainly comprises the three apps we develop here at Productboard — Portal, Signup, and our main app. To put things into perspective, GitHub says that in the last week, 35 authors pushed 110 commits to master (yet to be renamed to main — two characters less to write, yay!) and 322 commits to all branches, excluding merges. That’s huge! We try to stick with small pull requests, which we then squash and add on top of master with Kodiak — but more about that later.
To support this heavy machine, we have established an organization I’m leading called Frontend Platform. The team’s mission is to empower engineers and enable productivity. Ultimately, we want to deliver value to the customer, and we want to do it right and fast. That sounds straightforward, but it comes with challenges, especially at this scale. On top of that, we strive to deploy to production multiple times a day. So how exactly do we do it?
With tooling we stand. Without it we fall.
We realized soon enough that without a proper way to map dependencies between parts of our monorepo, we would end up with one big ball of mud. Without clear boundaries between modules, it was hard for us to let teams experiment with patterns and approaches, because everything was still part of one monolithic application. So, for the sake of consistency, any strategy had to work for everyone.
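Nx pairs its dependency graph with an ESLint rule that enforces exactly these kinds of boundaries. As a sketch, a tag-based constraint in `.eslintrc.json` could look like this — the tag names here are made up for illustration, not our actual setup:

```json
{
  "rules": {
    "@nrwl/nx/enforce-module-boundaries": [
      "error",
      {
        "depConstraints": [
          {
            "sourceTag": "scope:portal",
            "onlyDependOnLibsWithTags": ["scope:portal", "scope:shared"]
          },
          {
            "sourceTag": "scope:shared",
            "onlyDependOnLibsWithTags": ["scope:shared"]
          }
        ]
      }
    ]
  }
}
```

With something like this in place, a Portal module importing from a Signup-only module fails lint instead of quietly growing the ball of mud.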
Another logical consequence of not knowing the dependency graph and not building applications from many independent modules is that you always have to run tests, lint your code — basically everything, just name it — against every line of code. Simply put, you treat the whole monorepo as one big project. And trust me, when you are talking about 400k LoC, it takes some time. In darker days, our frontend feature branch pipeline took almost an hour to finish.
We had the most obvious problems laid out in front of us. To sum them up: first, we wanted a unified way to introduce modules that would be isolated from the rest of the stack behind a public API. Second, we wanted a way to track dependencies between those modules, so we could ultimately say which modules a PR affects and thus exactly what the CI pipeline should test, lint, build, and eventually deploy.
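The core idea behind "what does this PR affect?" is simple once you have the graph: start from the touched projects and walk the reverse edges. Here is a minimal sketch of that computation — an illustration of the concept, not Nx's actual implementation, with made-up project names:

```typescript
// project -> projects it depends on
type Graph = Record<string, string[]>;

function affected(graph: Graph, changed: string[]): Set<string> {
  // Invert the graph: project -> projects that depend on it
  const dependents: Record<string, string[]> = {};
  for (const [project, deps] of Object.entries(graph)) {
    for (const dep of deps) (dependents[dep] ??= []).push(project);
  }
  // Breadth-first walk from the changed projects through reverse edges
  const result = new Set(changed);
  const queue = [...changed];
  while (queue.length) {
    const current = queue.shift()!;
    for (const dependent of dependents[current] ?? []) {
      if (!result.has(dependent)) {
        result.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return result;
}

// Example: touching the shared "ui" library affects both apps that use it,
// while "main-app" (which doesn't depend on "ui") stays untouched.
const graph: Graph = {
  portal: ['ui', 'api-client'],
  signup: ['ui'],
  'main-app': ['api-client'],
  ui: [],
  'api-client': [],
};
console.log([...affected(graph, ['ui'])]);
```

Everything outside the affected set can safely be skipped by CI, which is where the big pipeline-time wins come from.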
First, we looked in the direction of scaffolding tools like Yeoman, Hygen, and Plop, but we soon realized they wouldn’t be enough for our use case. Essentially, those tools focus mainly on scaffolding features, but that’s just one part of the story for us. We also wanted insight into our monorepo — what’s being built and what dependencies those projects have. We scoped out Rush and later Nx, before finally deciding to give Nx a shot. It’s feature-rich, open to the community, extensible, and based on the battle-tested Angular CLI.
We started in Q1 2020. Fast forward a year, and we have registered more than 230 projects — every one of them benefiting from the ecosystem we put in place. We had to use some shortcuts to be able to work within an already-existing codebase, but in general, we were able to leverage Nx for every part of our codebase.
I guess now’s the right time to dive a bit deeper and explain how it really works — and how it affects our engineers on a daily basis. Let me describe a day in the life of our favorite employee, Joe Doe (you know, my colleagues would be mad at me if I picked one person and not them 🤓).
The story of Joe Doe
It’s a lovely day, and I’m working on an exciting new feature. The first thing I’m going to do is spin up the module for it.
I run a command, write the name of the module, and poof! My files are automatically created. Then I realize that I also want to scaffold a Storybook for it, so I write another command, and abracadabra! The Storybook is ready to use.
Now it’s time to do some magic. I can write another command that runs tests for my module in watch mode, or I can start the Storybook I created earlier and prototype the functionality there.
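Joe's morning can be sketched as a handful of commands. The generator and project names below are assumptions based on Nx's public generators (our custom ones differ), and `run` is a print-only stub so the sketch runs without Nx installed:

```shell
# Print-only stub so this sketch is runnable without Nx installed
run() { echo "+ $*"; }

# Spin up the module (hypothetical name)
run npx nx generate @nrwl/react:library notes-feature

# Scaffold a Storybook for it
run npx nx generate @nrwl/react:storybook-configuration notes-feature

# Run the module's tests in watch mode while prototyping
run npx nx test notes-feature --watch

# Or start the Storybook locally
run npx nx run notes-feature:storybook
```

In a real workspace you would drop the `run` stub and execute the `npx nx …` commands directly.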
Once my code is ready to be integrated, I simply open the PR. The CI pipeline is configured and powered by Nx, so the tooling automatically runs tests, linting, and other checks only on the projects affected in the dependency graph. After a few minutes, I get some info from our bots. It might look like this.
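Conceptually, the CI step boils down to something like the following. The exact targets and base branch are illustrative, and `run` is again a print-only stub so the sketch runs anywhere:

```shell
# Print-only stub so this sketch is runnable outside a real CI environment
run() { echo "+ $*"; }

# Compare against master to figure out what this branch touched
# (falls back to the branch name when no git repo is available)
BASE=$(git merge-base origin/master HEAD 2>/dev/null || echo master)

# Run each target only for the projects affected since BASE
run npx nx affected --target=lint --base="$BASE"
run npx nx affected --target=test --base="$BASE"
run npx nx affected --target=build --base="$BASE"
```

A small PR touching one leaf module ends up linting, testing, and building only a handful of projects instead of all 230+.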
We use https://danger.systems/! 🤘
It’s worth noting that I can access the revision under a special URL, so I can quickly showcase it to my team. Speaking of my team, once they do a review, they might see something like this.
This is custom tooling built on top of @actions/github. Do you want to hear more?
Once the PR is resolved, it’s time to merge it and have a cup of tea. We have robots for that as well — I mean for the merging, not the tea-making. I simply add a label, and Kodiak will do the rest. It integrates master to my PR, waits for the CI to pass (so we can be sure that the integration is OK), places my PR into a queue, and boom! After a couple of minutes, it’s merged.
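For the curious, Kodiak is driven by a `.kodiak.toml` file in the repository root. A minimal sketch — the label name and options here are illustrative, not our exact configuration:

```toml
version = 1

[merge]
# Adding this label queues the PR for automatic merging
automerge_label = "automerge"
# Squash the PR on top of master, keeping history linear
method = "squash"
delete_branch_on_merge = true
```

Kodiak then takes care of updating the branch, waiting for CI, and merging from its queue, exactly as described above.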
Don’t worry about that merge of master to my branch. Since the PR is up to date with master when it is being merged, the merge commit will be rebased away. With that, our master branch looks slick and clean — no branches, just pure fantasy!
You might ask why we have this step at all. Great question! In the past, it often happened that a PR felt all right. All the tests were passing. You know the drill. But once we merged it to master and the code was integrated with the newest changes, something went wrong. Our master pipeline started to fail, and everybody got blocked by it. So we decided to protect master and keep integrating code on feature branches.
One great side effect of this approach is that we don’t need to recheck everything during the master pipeline — it has already passed the checks on the feature branch. That means it’s also much faster and can directly proceed with deployment and e2e tests.
So that’s it for today. My PR is merged. Mission accomplished! Now let’s deploy it.
Back to reality
Joe Doe succeeded and delivered — after all, I like the phrase, “what’s not in master doesn’t count.” His feature is already on staging (thanks to continuous delivery) and exactly one click away from production — auto-deployment to production and rollbacks are on our roadmap, along with incremental builds, for example.
Remember at the beginning of the article when I said that before we introduced Nx, the CI pipeline took almost an hour to complete? Well, where do we stand now — 5 minutes? 15 minutes? It all depends on the PR. Some teams have average feature pipeline times of around 7 minutes, some around 15. It comes down to how big their PRs are and how many projects they affect. Now that our architecture scales, it’s all in their hands!
By the way, our PR lead time oscillates around 24 hours, which is one of the north-star metrics we measure, and we spotted a significant improvement once we integrated Kodiak into our flow.
I’m aware that I’ve described these topics faster than the Flash running through Central City, but I hope you got some idea of how things work here at Productboard. Initially, I was thinking about doing a series on this topic — but frankly, folks, I don’t trust that you would finish it. Feel free to prove me wrong in the comments!
Last but not least, I would like to recognize @martin_hotell and the rest of the team, who laid the foundations for all of this to be possible! Thanks, Martin!