<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Welcome, I'm Jakub!]]></title><description><![CDATA[👋 Hey, I'm Jakub – Software Engineer based in Prague. I went full circle from IC to EM and now I'm back. Don't hesitate to drop me a DM and ask for anything!]]></description><link>https://jukben.codes</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 13:02:43 GMT</lastBuildDate><atom:link href="https://jukben.codes/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Day My Product Leader Out-Engineered Me]]></title><description><![CDATA[We’ve all been there — convinced of our technical brilliance, armed with data, ready to shoot down ideas that contradict our findings. And then… reality checks in.
As a senior engineer, I pride myself on spotting optimization opportunities. Who doesn...]]></description><link>https://jukben.codes/the-day-my-product-leader-out-engineered-me</link><guid isPermaLink="true">https://jukben.codes/the-day-my-product-leader-out-engineered-me</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[lessons learned]]></category><category><![CDATA[leadership]]></category><category><![CDATA[Product Management]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Jakub Beneš]]></dc:creator><pubDate>Tue, 04 Mar 2025 14:00:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/G8wvNzm_fK0/upload/4e06c1b19d1f671ed478de4c1d1ec89c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We’ve all been there — convinced of our technical brilliance, armed with data, ready to shoot down ideas that contradict our findings. And then… reality checks in.</p>
<p>As a senior engineer, I pride myself on spotting optimization opportunities. Who doesn’t, right? I mean, not those premature optimizations, but the real ones — the impactful ones. Recently, I identified what seemed like a genuine performance improvement in our system. Being the diligent engineer I am, I consulted a colleague who had previously researched this exact issue.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:577/0*vXgYbAZwHk8yRJC_.jpg" alt="We already tested this. No need to revisit. Change my mind." class="image--center mx-auto" /></p>
<p>We already tested this. No need to revisit. Change my mind.</p>
<p>“Don’t bother,” he said, showing me data that suggested the gains would be negligible. I trusted his expertise — he knows his stuff — so I shelved the idea without a second thought.</p>
<p>Fast forward a few weeks. Our Product Leader swoops in with essentially the same optimization idea. I immediately pushed back, citing the “solid data” that proved it wouldn’t work. I should mention — this isn’t your typical product guy who barely knows what a for loop is. He has serious engineering intuition, which, if I’m being honest, only made me more determined to prove that I — the senior engineer — had already considered and dismissed this idea on solid technical grounds.</p>
<p>But this Product Leader didn’t give up easily.</p>
<p>“Are you <em>absolutely</em> sure?” he pressed.</p>
<p>Something about his confidence made me pause. “Fine,” I thought, “I’ll redo the measurements just to prove I’m right.”</p>
<p>Plot twist: I was wrong. Well, sort of.</p>
<p>The optimization <em>did</em> work—significantly. So what happened? As it turns out, the initial proof of concept had some fundamental flaws. The devil is in the details. My colleague who originally researched this was so close, but there was a complexity he missed. His PoC was almost flawless; however, the optimization only took effect in dev mode—in production, his code was counteracted by a different part of the system, a part he overlooked. That's why complexity sucks; it fights back.</p>
<p>While interpreting the data, he was surprised as well. It looked like it should work—dev mode showed promising numbers—but when he deployed the code to a production-like environment, nothing. It seemed that the near-real-life environment made the difference negligible. So he dismissed it.</p>
<p>This is where Occam’s razor should have come into play. The simplest explanation wasn’t that our optimization idea was fundamentally flawed — it was that something in our testing approach was masking the results. Ironically, it took an ego check for us to start questioning our assumptions. I ran the experiment from scratch and did it more thoroughly this time.</p>
<p>I reserved one day to look into it, and that's when I realized something crucial: our production and development environments differed in one critical component. The irony? This component was originally implemented to make the app faster, but it was actually counteracting our optimization, reverting its effect and making everything slower. The very thing designed to speed things up was fighting against our new speed improvements.</p>
<p>This time, I approached the problem methodically. I formed a clear hypothesis, ran comprehensive tests in production-like environments, and confirmed what I didn’t want to believe — the optimization would make things much faster (nearly 50% faster in P90).</p>
<p>Armed with data, I swallowed my pride and went back to our Product Leader. “You were right,” I admitted. The best part? I discovered we could implement the optimization as low-hanging fruit without needing massive refactors or architectural changes. From that moment, everything moved at lightning speed. We quickly shipped the code internally, then to selected customers, and finally to production. Each deployment confirmed our findings — the performance gains were real and significant.</p>
<p>In case you’re wondering — when presenting our findings to the broader audience, we conveniently omitted how we had initially dismissed the idea. That became our little professional secret. We documented the learning together and agreed that haste makes waste. If anything, this experience made our collaboration stronger (and gave us something to laugh about in private).</p>
<h1 id="heading-lessons-learned"><strong>Lessons Learned</strong></h1>
<p>This experience was a big slice of humble pie with some useful takeaways:</p>
<ol>
<li><p><strong>Trust but verify</strong>: Even when working with skilled colleagues, critical findings deserve independent validation.</p>
</li>
<li><p><strong>Titles don’t confer infallibility</strong>: Being a senior engineer doesn’t make me immune to errors or cognitive bias.</p>
</li>
<li><p><strong>Resistance to revisiting “solved” problems can be costly</strong>: I nearly missed a significant optimization opportunity because I was too comfortable with the initial conclusion.</p>
</li>
<li><p><strong>The simplest answer: we tested wrong:</strong> Our Product Leader’s instinct to question our conclusion rather than our theory proved invaluable. This is Occam’s razor in action — the simplest explanation is often correct. Our experiment was flawed, not our optimization theory. Our collective intuition that the optimization should work was right all along; we just needed someone to push us to look past our initial test results and verify our approach.</p>
</li>
<li><p><strong>System complexity is the eternal enemy</strong>: If our development and production environments had been more similar, we would have identified the opportunity immediately. The more moving parts and differences between environments, the harder it becomes to isolate variables and make accurate assessments. Every layer of abstraction or configuration difference adds a potential blind spot.</p>
</li>
</ol>
<p>So, did this humbling experience actually happen, or did I craft it to illustrate important engineering principles? Well, who knows. Either way, the lesson stands: the only thing worse than being wrong is being confidently wrong — especially when you have “senior engineer” in your title.</p>
]]></content:encoded></item><item><title><![CDATA[Analyzing Your Webpack Bundle Like a Pro]]></title><description><![CDATA[Welcome to today’s article, where I’m excited to share a few tips on how to analyze your application’s JavaScript bundle. Specifically, I’ll guide you through identifying the reasons behind bundling certain dependencies. Let’s dive in from the very b...]]></description><link>https://jukben.codes/analyzing-your-webpack-bundle-like-a-pro</link><guid isPermaLink="true">https://jukben.codes/analyzing-your-webpack-bundle-like-a-pro</guid><category><![CDATA[webpack]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Jakub Beneš]]></dc:creator><pubDate>Wed, 23 Aug 2023 08:53:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/fyaTq-fIlro/upload/95ebe5e1b95fb3d9bc8b18f4c3a34ea0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to today’s article, where I’m excited to share a few tips on how to analyze your application’s JavaScript bundle. Specifically, I’ll guide you through identifying the reasons behind bundling certain dependencies. Let’s dive in from the very beginning.</p>
<p>Recently, I was deep in debugging our fairly large React application that we have here at <a target="_blank" href="https://www.outreach.io/">Outreach</a> when I noticed some unusual behavior from React Developer Tools. Upon investigating further, I discovered that React Developer Tools was attempting to connect to a different React renderer. Which renderer, you might wonder? It turns out, it was attempting to connect to the renderer originating from <code>react-test-renderer</code>.</p>
<p>To put it simply, due to certain race conditions, it was randomly selecting either our main React renderer or the TestRenderer. This behavior was definitely unexpected — after all, the TestRenderer should only be utilized in testing scenarios. So, who was importing it? And why?</p>
<p>To start with, I verified that we had indeed bundled <code>react-test-renderer</code> as part of our production bundle. This raised several concerns, not least of which was the additional 28 kB of download, parsing, and JavaScript execution burden imposed on our users.</p>
<p>As part of our CI pipeline, we utilize the <a target="_blank" href="https://github.com/webpack-contrib/webpack-bundle-analyzer">Webpack Bundle Analyzer</a> to generate outputs. With this tool at hand, I could easily use its built-in search feature to confirm my suspicions. Snap!</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/0*lcbPwALa8Yii3R1z" alt="Webpack Bundle Analyzer — generated HTML with highlighted react-test-renderer dependency" class="image--center mx-auto" /></p>
<p>Now that we’ve identified the issue, the next step is to uncover how this undesired dependency made its way into the bundle. You might be tempted to simply use the search function in your IDE, and if you’re lucky, you might stumble upon its usage in <code>App.tsx</code>. However, in most cases, it’s not so straightforward. Our monorepo contains over 1 million lines of TypeScript code, which made me quickly realize that a more sophisticated approach was needed. 🤓</p>
<p>Enter <a target="_blank" href="https://statoscope.tech/">Statoscope</a>. All we require are the <a target="_blank" href="https://webpack.js.org/api/stats/">stats data</a> from Webpack. The process is relatively straightforward (although in our case, we had to run the command through <code>node</code> with <code>--max-old-space-size</code> to avoid running out of memory — a challenge we managed to overcome 😀).</p>
<pre><code class="lang-bash">webpack --json stats.json
</code></pre>
<p>Once we possess our <code>stats.json</code> file (ours was a whopping 2.1 GB), it’s a simple matter of dragging and dropping it into Statoscope. Rest assured, it’s a local-first application and no data is uploaded — and processing takes just a few seconds. With Statoscope, we can then examine why a particular module was required.</p>
<p><img src="https://miro.medium.com/v2/resize:fit:700/1*z03i90GmXfGUNEwV2bF0Aw.png" alt="Statoscope providing an explanation of what’s importing the module" class="image--center mx-auto" /></p>
<p>In our scenario, the root cause was clear: <code>react-test-renderer</code> was being required (as part of <a target="_blank" href="https://testing-library.com/docs/react-testing-library/intro/">React Testing Library</a>) within a <code>util</code> file that was used by both tests and app code. Because the code wasn’t effectively tree-shaken (given that it’s a third-party dependency and Webpack, playing it safe, can’t be sure it’s free of side effects), it unintentionally became part of the bundle itself.</p>
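<p>As an aside, when the offending package is your own code (or one you can patch), you can tell Webpack explicitly that it is safe to tree-shake. This is just a minimal sketch of the library's <code>package.json</code> — the <code>sideEffects</code> field is a standard Webpack convention, but the package name here is made up, and whether the flag is safe depends on the package actually having no import-time side effects:</p>

```json
{
  "name": "my-util-library",
  "sideEffects": false
}
```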
<p>After fixing the issue by splitting the file in two, so that the part required by app code no longer pulled in this dependency, React Developer Tools was finally happy again! And as a nice bonus, we shaved off 28 kB of JavaScript.</p>
<p>P.S.: Statoscope also provides <a target="_blank" href="https://github.com/statoscope/statoscope">CLI tools</a> that can seamlessly integrate into your CI/CD system, ensuring that no such unwanted dependencies make their way into the production bundle. This serves as a valuable safeguard to prevent such occurrences in the future.</p>
]]></content:encoded></item><item><title><![CDATA[The Ultimate Git Flow: Trunk-Based Development and Stacked PRs for the Win]]></title><description><![CDATA[Hmm, the title sounds a bit clickbaity, doesn't it? However, I would like to take the time to discuss the git flow I'm using, what it entails, and why I believe it is essential for high-performing teams in this article.
As always, I write this articl...]]></description><link>https://jukben.codes/the-ultimate-git-flow-trunk-based-development-and-stacked-prs-for-the-win</link><guid isPermaLink="true">https://jukben.codes/the-ultimate-git-flow-trunk-based-development-and-stacked-prs-for-the-win</guid><category><![CDATA[GitHub]]></category><category><![CDATA[Pull Requests]]></category><dc:creator><![CDATA[Jakub Beneš]]></dc:creator><pubDate>Tue, 31 Jan 2023 22:00:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/6p-KtXCBGNw/upload/8606b03347162bab8327c873539a24e2.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hmm, the title sounds a bit clickbaity, doesn't it? However, I would like to take the time to discuss the git flow I'm using, what it entails, and why I believe it is essential for high-performing teams in this article.</p>
<p>As always, I write this article mostly for myself. Not that I would believe that I would suddenly lose my memory and start wondering what I'm doing here. Rather, I continue in my efforts to document what I believe works well, so I have an easier time passing this information around in this format. In the past, I would probably have used Twitter for this, <a target="_blank" href="https://jukben.codes/a-reflection-on-my-social-media-usage-in-the-era-of-elon-musk">but recently</a> I decided that every time I have a big urge to share something bigger, I would write an article about it. So here we are.</p>
<p>I believe there is no point in doing a fancy introduction of Git to you. You most likely know it very well. Git is one of the most popular distributed version control systems, which aims at tracking changes in your source code and thus making collaboration possible at scale. It would be an understatement to say that Git is popular; it is practically everywhere in modern software development. Let me know if you use something different and the reasons for that in the comments (pro tip: in that case, it might be a good idea to start looking for a new challenge; your career might need it).</p>
<h1 id="heading-trunk-based-development">Trunk-based development</h1>
<p>There are lots of opinions on the best way of working with Git at small-to-big software product companies. I have a personal answer (and it is not that controversial): trunk-based development. You might be using it already and just not know it by that name.</p>
<p>This idea is simple: in order to move fast, you need to ship fast, iterate, and do it all over again. (Shameless plug: you might enjoy <a target="_blank" href="https://jukben.codes/lessons-learned-from-building-products-the-power-of-outcomes-over-outputs">my recent article about high-performing product teams and their mindset</a>.) To reiterate what I just said, developers are encouraged to merge small, frequent updates to a "trunk" branch (conventionally called the "main" branch). Once the code is merged, it's all in the hands of CI/CD (<a target="_blank" href="https://jukben.codes/scaling-monorepo-to-infinity-and-beyond">I have some opinions on this topic as well</a>); tests are run, code is built, and the app is shipped to customers. Everyone loves it. Rinse and repeat.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675200281024/0169c78e-0518-4abb-b4e7-dce18551dbd3.png" alt class="image--center mx-auto" /></p>
<p>It might look like this: I create a branch with my feature, iterate on it (more about it later) and once it's ready, we do a review with the team. If everything is all right, we squash-merge it to the trunk branch.</p>
<p>There are many nuances to this one though. Firstly, you can technically do a fast-forward merge, where the commit ID would stay the same.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675200301983/d5b7d78d-28c6-4540-8a19-67532359a524.png" alt class="image--center mx-auto" /></p>
<p>This is how it would look.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675200437111/c6cfdef9-c6b8-4eb1-9080-2a88b8c28ce2.png" alt class="image--center mx-auto" /></p>
<p>That is useful if you want to optimize CI/CD runs — since the commit ID stays the same, results from the branch build remain valid on the trunk. It is good practice to keep your PR up to date with the main branch so you can be sure that the integration of your code is flawless. Otherwise, something that looks fine on your branch might break once it is put on top of the actual "trunk" branch.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675200453512/eee76fd6-8227-4fb0-9445-1574854f9f5e.png" alt class="image--center mx-auto" /></p>
<p>Consider something like this. Here, you can see that my feature is missing code from the commit with ID <code>e</code> which might (or might not) be important.</p>
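<p>In raw Git, keeping a feature branch current with the trunk is a short routine (branch names here are illustrative; adjust to your setup):</p>

```shell
# update your local view of the remote, then replay your
# feature commits on top of the latest trunk
git fetch origin
git rebase origin/main
```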
<p>Anyway, back to my case (see the first image) – I tend to prefer <a target="_blank" href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/about-pull-request-merges#squash-and-merge-your-commits">squash merging</a>. Making sure your commits are atomic (a fancy word for self-contained) enough to be worth keeping in the history can be tedious, and trust me, with a big enough team it is hard to maintain consistency. To play it safe, I found that squashing the feature into one commit with a descriptive message is much better than adding 10 commits, most of which are "fix" or "wip".</p>
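<p>For completeness, here is what a squash merge looks like in raw Git (branch names are illustrative; on GitHub you would typically press the "Squash and merge" button instead):</p>

```shell
git checkout main
# stage all changes from the feature branch without committing
git merge --squash my-feature
# land them as a single, well-described commit on the trunk
git commit -m "feat: my new feature"
```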
<p>There is one important point I have omitted so far. In order to do this, you have to separate <em>deployment</em> (which, as we said, should happen every time you merge something) from <em>release</em>. What does this mean? Simply put, you should be free to merge imperfect pull requests as long as they do not break the whole system. How to do that? Feature flags, labs flags, call them what you want. The idea is that you are in control of what is available to the customer. With a system like this, you can eventually roll out features based on geography, user cohort, and more. The same goes for a fast rollback in case something goes wrong; you can just turn the flag off again. Boom. This sets a great mindset where the way forward is through iteration. Not to mention derisking deployments; they will be so frequent that each one stops being a big deal.</p>
<h1 id="heading-stacked-pull-requests">Stacked pull requests</h1>
<p>There is a second part to what I have shared so far. Hopefully, you got the point with the fast pace of development that trunk-based flow enables. However, there is still at least one important part that we can take a deeper look at: pull requests or merge requests, if you like.</p>
<p>I don't want to explain why pull requests are essential in software development and the culture I'm trying to build as a leader. Not because I'm not passionate about the topic, but because I don't want to bore you to death. However, this is my article. I recently had a discussion where I had to defend pull requests as a source of knowledge sharing and expanding domain knowledge. It's not the only reason we love pull requests. It's also a great tool to make sure we ship the best code we can as a team. What's obvious to one person might be confusing to another. Pull requests are a great place to catch cases like this and make sure the trade-off makes sense, that stuff is documented, and that it is tested. I guess I will write something about code-reviewing culture someday in the future. For now, I guess you get the point.</p>
<p>I wrote a few paragraphs above about atomic commits. To make pull request review easier for your team, I highly recommend switching to atomic pull requests. These pull requests should be small and self-contained, tackling one problem at a time. I also like to introduce either behavioral change or structural change, not both together. There is more to this than meets the eye; in theory, it is simple, but it may sound tedious to juggle all the pull requests and make sure they can be built on top of each other.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675200472002/414be803-42f7-4f65-86da-b919fdb52d8a.png" alt class="image--center mx-auto" /></p>
<p>Imagine you have two pull requests open. The base branch for the second PR is the first one. This allows you to work with changes from the previous one while streamlining the conversation (and the review itself) to only the newly added code, because the rest is covered in the first pull request (<code>1st-pull-request</code>). With this approach, you can keep building your complex feature and still make sure each pull request reaches the main branch swiftly. Not to mention that smaller PRs are much easier to review — if a PR is too big to review thoroughly, what's the point of the review anyway? By stacking pull requests on top of each other, you can work on them concurrently; once the bottom-most one is mergeable, merge it, rebase the rest, and continue.</p>
<p>But imagine a situation in which someone made a good point about something in your first pull request, so you go and update it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675200495886/db8f3842-fa67-4acc-ad7d-a1c36f2da7db.png" alt class="image--center mx-auto" /></p>
<p>Now you have to rebase the second one in order to maintain the integrity of the stack. Isn't that too much work?</p>
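<p>Manually, keeping the stack intact after updating the first PR looks something like this (branch names are illustrative):</p>

```shell
# the first PR received a review fix; now realign the second one
git checkout 2nd-pull-request
git rebase 1st-pull-request   # replay its commits on the updated base
git push --force-with-lease   # safely update the remote branch
```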
<p>In a world without stacking, your second pull request would be blocked until you merge the first one, and that can be frustrating.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675200501855/4c06046a-818d-460d-82a1-5ad36fbf7087.png" alt class="image--center mx-auto" /></p>
<p>Yes, there has to be another way. You bet there is one.</p>
<h2 id="heading-meet-graphitedev">Meet Graphite.dev</h2>
<p>To my understanding, this flow is quite common in Big Tech; however, I have never worked at one of those mighty companies, so take it with a grain of salt. That said, I have no reason to doubt that it really is that productive.</p>
<p><a target="_blank" href="https://graphite.dev/">Graphite</a> is a product that enables just that (and more). It is currently in beta (I have some invites left, so if you are curious, feel free to reach out to me!). Their newly redesigned page does a great job of explaining how it works, so <a target="_blank" href="https://graphite.dev/">start there</a>.</p>
<p>It's mostly two things. The first is a dashboard, which offers a slightly different take on GitHub's pull requests (with a meme database so you can quickly comment with your favorite GIF). The UI lets you merge the whole stack with one click, but I rarely want that, so I usually stay in GitHub anyway.</p>
<p>It is the CLI that matters the most. Do you remember the chore I described, the one required to keep your stacked PRs aligned after every commit? This heavy lifting is done by the CLI, so you can stack and be cool about it.</p>
<p>It is worth saying that Graphite adds a bit of its own terminology to the game. There is "<a target="_blank" href="https://docs.graphite.dev/guides/graphite-cli/familiarizing-yourself-with-gt#definitions">stack</a>," "<a target="_blank" href="https://docs.graphite.dev/guides/graphite-cli/familiarizing-yourself-with-gt#definitions">upstack</a>," and "<a target="_blank" href="https://docs.graphite.dev/guides/graphite-cli/familiarizing-yourself-with-gt#definitions">downstack</a>." In case you use Git directly from the command line, you need to rewire your brain to <code>gt</code> (an alias for the <code>graphite</code> CLI), which is luckily proxied to Git: any unrecognized command is passed through. Though if you are used to raw Git, as I was, it might take some time to adapt.</p>
<p>Here is an example of the workflow to make it more visual.</p>
<pre><code class="lang-bash">gt <span class="hljs-built_in">log</span> <span class="hljs-comment"># this prints current stacks visually</span>
gt add -A <span class="hljs-comment"># add files, passthrough to git binary</span>
gt branch create -m <span class="hljs-string">"my new feature"</span> <span class="hljs-comment"># create branch from currently staged files</span>
<span class="hljs-comment"># ... work on stuff</span>
gt add -A
gt commit create -m <span class="hljs-string">"additional fix which might go in separate PR"</span>
gt branch split <span class="hljs-comment"># splits the stack by commits</span>
<span class="hljs-comment"># ... some time later</span>
gt repo sync <span class="hljs-comment"># pull latest trunk branch</span>
gt branch checkout <span class="hljs-comment"># pick stack you want to work on</span>
gt upstack restack <span class="hljs-comment"># ensure the current branch and each of its descendants is based on its parent, rebasing if necessary.</span>
</code></pre>
<p>Don't worry if you are confused; I encourage you to try it. It's easier than it seems. Anyway, one feature I quite like is that whenever you run <code>gt repo sync</code>, it checks for merged/closed PRs and asks you to remove their local branch. With that, it's always easy to check out only relevant branches (stacks).</p>
<p>Every time you restack your stack, Graphite updates/rebases all the PRs affected by the change under the hood. Reviewers also understand that the PR is part of something bigger, as Graphite automatically references relevant pull requests.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675030059985/ff747d12-9495-49ee-8931-bedf4195a243.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-alternatives">Alternatives</h2>
<p>Graphite isn't the only option on the market; there's also <a target="_blank" href="https://sapling-scm.com/">Sapling</a> from Meta and <a target="_blank" href="https://github.com/ezyang/ghstack">ghstack</a>. I haven't spent much time with them, but Sapling looks interesting. However, there are some caveats. The great thing about Graphite is that you can easily plug it into your company's workflow without causing any havoc; pull requests stay pretty much normal and easy to review. You can't say the same about Sapling: it has Git interoperability, but it's generally recommended to onboard reviewers onto it as well, which is hard to imagine in my shoes. I'd be curious to learn if someone was bold enough to roll it out in a company with more than 100 engineers (except Meta, of course! Haha, you thought you got me, right?).</p>
<p>One important note: all of these are focused on GitHub. That's not a big deal for me, because for the last 8 years all of the repositories I have contributed to have been there, but it might be a deal breaker for some. The concept itself, though, should generalize.</p>
<h2 id="heading-future">Future</h2>
<p>Are you still with me? I hope you are! Graphite is still quite new and I'm curious about how it will shape up. Frankly, I'm not getting any extra value from the dashboard; I'd be happy with the CLI alone. My secret dream, which I reveal to you as a reward for making it here, is that this CLI could be recreated with the GitHub CLI (<code>gh</code>) using its <a target="_blank" href="https://cli.github.com/manual/gh_extension">extension capabilities</a>. This would undoubtedly do a great service to the popularity of stacked PRs, as it would decouple the CLI from the Graphite dashboard. For the Graphite CLI to update pull requests based on your stack, you need to authenticate against their API (and thus need an account there). However, I believe the same thing should be possible by calling the GitHub API directly from the CLI. Well, maybe in the near future, who knows? If you would like to take a stab at it and kick it off, let me know; I would love to help.</p>
]]></content:encoded></item><item><title><![CDATA[Lessons Learned from Building Products: The Power of Outcomes over Outputs]]></title><description><![CDATA[In this article, I reflect on my journey as a software engineer and the lessons I've learned from building products. Through my experiences, I've come to understand the importance of focusing on outcomes rather than outputs, and I believe this approa...]]></description><link>https://jukben.codes/lessons-learned-from-building-products-the-power-of-outcomes-over-outputs</link><guid isPermaLink="true">https://jukben.codes/lessons-learned-from-building-products-the-power-of-outcomes-over-outputs</guid><category><![CDATA[essay ]]></category><category><![CDATA[product development]]></category><dc:creator><![CDATA[Jakub Beneš]]></dc:creator><pubDate>Sun, 08 Jan 2023 21:26:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672790683758/6ab39c5b-228c-4cb9-88b8-60ae7751581b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, I reflect on my journey as a software engineer and the lessons I've learned from building products. Through my experiences, I've come to understand the importance of focusing on outcomes rather than outputs, and I believe this approach is key to creating products that truly deliver value to users. In the following sections, I'll discuss the challenges I've faced in my career, the role of mentorship in helping me learn and grow, and the importance of being directly involved in product development in order to solve real customer pain points. By sharing these insights, I hope to provide some inspiration for others who are also seeking to build products that matter.</p>
<p>At the very start of my career, it was all about going solo. I started as a solo developer, doing tiny contracts and barely making more money than I would make if I spent the same time at a local McDonald's. No matter what, I loved that - earning money with something that started as a hobby is very empowering and I feel grateful for it.</p>
<p>I quickly realized that it wasn't a way forward forever - at least not for me. If you go solo, it is important to have strong mentors within reach; it is much easier to learn with great teachers around you. The exercise I like to use to check how I am doing is to ask myself if I ever feel like I am the smartest person in the room. If I do, it means I am in the wrong room and I should move on. Of course, that heuristic breaks down when you work solo - you are always the smartest person in a room of one! Being exposed to other, more senior developers, and to other roles in product development, was essential for me to broaden my context; it helped me massively to see the value of considering the long-term impact of my work and striving to solve real customer pain points rather than just churning out code. This shift in mindset has been crucial for me in terms of delivering more effective solutions.</p>
<p>Like many developers and engineers from my country, later on I moved to an agency delivering (usually small) software solutions. And honestly, it was great. Here and there I had ad-hoc mentoring sessions with my peers and I feel quite strongly that a good agency is a pretty good starting point for one's career. Looking back at my younger self, I was able to touch a variety of different tech - implement them from scratch into greenfield projects, see them fail and rebuild them again. I was able to experiment with different languages, frameworks, and services. It would take an entirely different article if I dived more into how development, 10 years ago, was for me. However, that's not what I want to write about. At least not today.</p>
<p>The thing different agencies have in common is the fact that it's usually hard to be working on something personally meaningful. Something you can connect to. Yeah, you might be super invested in some new technology and then the client might be so kind as to have you implement it for them. But sooner or later you realize that if you are detached from real customers, you'll start losing interest and, worse, you will solve the wrong problems. You might spend months building the best real-time architecture just to realize there is little to no collaboration happening and simple polling would do the job.</p>
<p>When I was a child, I developed games for fun; <a target="_blank" href="https://en.wikipedia.org/wiki/GameMaker">Game Maker</a> was my obsession. It was amazing to be able to create games, rules, and universes - whatever I could think of. Maybe also because of this, I realized what I needed was to be directly involved in product development, to be part of something great for the end users, creating emotion and delivering value to them. I know it is tempting to focus only on the technical aspect, but I ultimately think the outcome is all that matters. I'm a missionary, not a mercenary; your users don't care if your Todo app uses Redux or not, they care about getting value from it.</p>
<p>Outcome is the key: it took me a few years in my career to recognize that aiming for outcomes rather than outputs is the key to success and to feeling grateful for what you build. Solving a user's pain by removing a bunch of code is a particularly nice outcome - I'd even argue it's the best kind, and I celebrate every moment when I remove code, especially my own. The difference is that if you are judged by outputs, it is much harder to sell the fact that you removed code; at worst, you might even look incompetent for having written it in the first place. Shame.</p>
<p>If you have never experienced a truly empowered product team, trust me, you are missing out and you should fix it. If you want to read something about this topic, I highly recommend <a target="_blank" href="https://www.amazon.com/INSPIRED-Create-Tech-Products-Customers-ebook/dp/B077NRB36N">INSPIRED: How to Create Tech Products Customers Love</a> by Marty Cagan.</p>
<p>The book describes, among other great things, four big risks in product development:</p>
<ul>
<li><p>market risk (will the product or service be perceived as valuable by potential buyers and generate sufficient demand?)</p>
</li>
<li><p>usability risk (will users be able to easily understand and use the product or service as intended?)</p>
</li>
<li><p>feasibility risk (do we have the necessary resources, skills, and technology to successfully develop and deliver the product or service as planned?)</p>
</li>
<li><p>business risk (will the product or service support the overall goals and objectives of the business and be financially successful?)</p>
</li>
</ul>
<p>In a nutshell, there are two basic buckets where you might end up as an engineer: a feature team or a product team. Your affiliation determines all the rest of the relationships you might experience while being a member of either one or the other group. Feasibility is a concern of engineering, that's clear. Hopefully, you always have a dedicated designer, so you can cross out the usability risk as well. Although good product designers want to seek solutions, not draw borders and buttons someone sketches out of the blue. Where it starts to be tricky is business and market fit.</p>
<p>In a truly empowered team, this is a concern and explicit responsibility of the product manager. On the other hand, in feature teams, it is usually an executive or another stakeholder pushing the feature onto the roadmap with the expectation of a return on investment, expanding the business, or both. The product manager then serves more like a project manager, trying to handle all the parties and eventually helping the project to get delivered, left at the mercy of stakeholders.</p>
<p>You might be wondering what happens when the thing you deliver doesn't perform as expected. You guessed right: it will mostly be your problem. They'll blame bad design, say it took too long, or that it was underutilized - or all three together.</p>
<p>On the contrary, in the case of empowered product teams, you do the research as a team (I highly recommend taking a look at the <a target="_blank" href="https://www.designcouncil.org.uk/our-work/skills-learning/tools-frameworks/framework-for-innovation-design-councils-evolved-double-diamond/">Double Diamond</a> approach), you propose the solution, and it is your responsibility to properly validate the solution and keep iterating until the problem is solved. The goal is to solve the problem, and learning how to do it well enough is part of the journey.</p>
<p>Of course, nothing is that simple in the real world, and the buckets I described are more like a spectrum that your team might be on. However, there is likely space for both types of teams, especially during different times and company phases. It is essential for you to understand where you stand so you can set your expectations correctly.</p>
<p>One last thought I cannot resist pointing out is team play, a crucial part of product teams. By being too restrictive about the roles and responsibilities of each team member, you may not fully utilize their skills and expertise. In my opinion, engineers should be allowed to participate in activities such as product discovery and strategy sessions, as they often have valuable insights as users of the product and may be able to identify opportunities for improvement that others might overlook. Encouraging cross-functional collaboration can lead to more creative and efficient solutions.</p>
<p>With that said, I wouldn't be surprised if a really good craftsman would prefer to work in an empowered product team where they can work on more than just stacked JIRA issues, prioritized by someone who might no longer be with the company.</p>
]]></content:encoded></item><item><title><![CDATA[A Reflection on My Social Media Usage in the Era of Elon Musk]]></title><description><![CDATA[Recently, when Elon Musk took over Twitter, it made me really reflect on my stance on social media. I wouldn't consider myself to be a great Twitter power user - though I've been on Twitter since 2009. My last approach, a strategy if you will, was to...]]></description><link>https://jukben.codes/a-reflection-on-my-social-media-usage-in-the-era-of-elon-musk</link><guid isPermaLink="true">https://jukben.codes/a-reflection-on-my-social-media-usage-in-the-era-of-elon-musk</guid><category><![CDATA[essay ]]></category><category><![CDATA[social media]]></category><dc:creator><![CDATA[Jakub Beneš]]></dc:creator><pubDate>Mon, 26 Dec 2022 20:35:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672085168687/0e2c98a7-99a4-4063-a32c-5d1cb8d9cb70.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, when Elon Musk took over Twitter, it made me really reflect on my stance on social media. I wouldn't consider myself to be a great Twitter power user - though I've been on Twitter since 2009. My last approach, a strategy if you will, was to use Twitter as some kind of microblog platform. A platform where I could, mostly for myself, share things I learn or observe so I could use it as a reference in the future. Also mostly for myself.</p>
<p>As you know, things changed at Twitter; I wouldn't say for the better - it's a pretty big shitshow if you ask me. However, that's not really the point. Social media changed, and I'm not sure if it's just me, <a target="_blank" href="https://www.theatlantic.com/technology/archive/2022/11/twitter-facebook-social-media-decline/672074/">probably not</a>, but I got to the point where I could barely stand the polarization and hatred in every tiny bit of information I glanced at. The times when I was scrolling through the timeline of cool tech stuff are gone, now everywhere are just fancy clickbait-ish headlines with little or no value.</p>
<p>What is the point? What am I getting back for my time? I've asked myself these questions many times. The times when I had fun on Facebook, Instagram, Twitter, or LinkedIn, or learned something there that helped me progress in life, are gone. I remember pretty clearly that once Instagram added the Stories feature - I got lost - I kinda didn't know how to use it, or maybe I just lost some narcissistic part of my personality; either way, I haven't picked it up to this day. Maybe I'm just getting old.</p>
<p>So here's what I plan to do now - whenever I have the itch to open Twitter, I open Reeder instead. I've been collecting blogs and interesting articles for more than a year now and Reeder does an awesome job of that. However, I was always struggling with finding time to go through the things I saved there. Silly, I know. I'm not trying to set any goals or something like this - I guess there's no need to torture my brain with more numbers, badges, and achievements to compete with. It's more about that approach - whenever I decide to consume something, I should try to bet on something worth it. My father was always saying: “One should finish the book, even though they think it's bad. It's good to create a sense of what's bad, to recognize it later”. He's not wrong, at least not entirely - but I'm trying to internalize that once I know it's bad, I'm just going to drop it. Life is too precious to be buried in poor content.</p>
<p>I'd also like to go further and try to establish some habits to actually create things. To get better at writing, for example. After all, writing is quite a big portion of being a senior+ engineer. This post is a kind of contract between me and future me. To have something tangible for my next reflection. We will see. Until then; see you around, my friends.</p>
<p>P.S: The cover image was generated by Stable Diffusion with this prompt: <code>Abstract oil painting of a man who just signed contract with themself as over-the shoulder shot</code> via <a target="_blank" href="https://diffusionbee.com/">Diffusion Bee</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Scaling Monorepo: To Infinity and Beyond!]]></title><description><![CDATA[Having one big monorepo is an obvious choice for some organizations. But unless you are a FAANG company, you might find it hard to support it in a way that scales. It is well known that those companies invest heavily in the developer experience and e...]]></description><link>https://jukben.codes/scaling-monorepo-to-infinity-and-beyond</link><guid isPermaLink="true">https://jukben.codes/scaling-monorepo-to-infinity-and-beyond</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[repository]]></category><dc:creator><![CDATA[Jakub Beneš]]></dc:creator><pubDate>Tue, 30 Mar 2021 19:35:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/857R--_CvP0/upload/v1643920785163/WxcrqBd4s.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Having one big monorepo is an obvious choice for some organizations. But unless you are a <a target="_blank" href="https://en.wikipedia.org/wiki/Big_Tech">FAANG</a> company, you might find it hard to support it in a way that scales. It is well known that those companies invest heavily in the developer experience and engineering productivity, which naturally comes at a considerable cost.</p>
<p>One interesting thing I have observed over time is that open-source solutions tend to struggle with bigger code bases. That’s when custom tooling comes into play — or not necessarily. The truth lies somewhere in the middle.</p>
<p>To give you a clearer picture, <a target="_blank" href="https://github.com/AlDanial/cloc">cloc</a> says we have almost 400k lines of TypeScript in our frontend monorepo.</p>
<p><img src="https://miro.medium.com/max/1204/0*eG7EX5CBgw8z3bQu" alt="cloc’s terminal output for our monorepo" class="image--center mx-auto" /></p>
<p>That mainly comprises the three apps we develop here at Productboard — Portal, Signup, and our main app. To put things into perspective, GitHub says that for the last week, 35 authors pushed 110 commits to master (yet to be renamed to <code>main</code> — two characters fewer to write, yay!) and 322 commits to all branches, excluding merges. That’s huge! We try to stick with small pull requests, which we then squash and add on top of master with Kodiak — but more about that later.</p>
<p>To support this heavy machine, we have established an organization I’m leading called Frontend Platform. The team’s mission is to empower engineers and enable productivity. Ultimately, we want to deliver value to the customer, and we want to do it right and fast. That sounds straightforward, but it comes with challenges, especially at this scale. On top of that, we strive to deploy to production multiple times a day. So how exactly do we do it?</p>
<h2 id="heading-with-tooling-we-stand-without-it-we-fall">With tooling we stand. Without it we fall.</h2>
<p>We realized soon enough that without a proper way to map dependencies between parts of our monorepo, we end up with one <a target="_blank" href="https://en.wikipedia.org/wiki/Big_ball_of_mud">big ball of mud</a>. Without clear boundaries between modules, it was hard for us to allow teams to experiment with patterns and approaches because it was still part of a monolithic application. So, for the sake of consistency, you would have to create a strategy for everyone.</p>
<p>Another logical consequence of not knowing the dependency graph and not building applications from many independent modules is that you always have to run tests, lint your code, basically everything — just name it — against every line of code. Simply put, you treat the whole monorepo as one big project. And trust me, when you are talking about 400k LoC, it takes some time. In darker days, our frontend feature branch pipeline took us almost an hour to finish.</p>
<p><img src="https://miro.medium.com/max/966/0*sAvmaGYnXTb2NCTT.jpeg" alt class="image--center mx-auto" /></p>
<p>We had the most obvious problems laid out in front of us. To sum it up, we wanted a unified way to introduce modules that would be easily isolated from the rest of the stack via a public API. Secondly, we wanted a way to track dependencies between those modules, so we could ultimately say what modules you are affecting within your PR and thus exactly what the CI pipeline should test, lint, build, and eventually deploy.</p>
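<p>To make the “affected” idea concrete, here is a toy sketch of the computation (my own illustration, not Nx’s actual implementation; the project names are made up): anything that transitively depends on a changed project is affected and needs to be re-tested.</p>

```typescript
// Toy sketch of "affected" computation — NOT Nx's real code, just the idea.
// Given a project dependency graph and the set of directly changed projects,
// everything that transitively depends on a changed project is affected.
type Graph = Record<string, string[]>; // project -> its dependencies

function affected(graph: Graph, changed: string[]): Set<string> {
  // Invert the graph: project -> projects that depend on it
  const dependents: Record<string, string[]> = {};
  for (const [proj, deps] of Object.entries(graph)) {
    for (const dep of deps) (dependents[dep] ??= []).push(proj);
  }
  // Breadth-first walk from the changed projects through their dependents
  const result = new Set(changed);
  const queue = [...changed];
  while (queue.length) {
    const p = queue.shift()!;
    for (const d of dependents[p] ?? []) {
      if (!result.has(d)) {
        result.add(d);
        queue.push(d);
      }
    }
  }
  return result;
}

// Hypothetical example: both apps depend on ui-kit, which depends on utils.
const graph: Graph = {
  "main-app": ["ui-kit"],
  "portal": ["ui-kit"],
  "ui-kit": ["utils"],
  "utils": [],
};
// A change in utils ripples up to ui-kit and both apps.
console.log([...affected(graph, ["utils"])].sort());
```

<p>In this sketch, a change in <code>utils</code> marks <code>ui-kit</code> and both apps as affected, while a change in <code>main-app</code> affects only itself; that difference is exactly what keeps most feature-branch pipelines short.</p>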
<p>First, we were looking in the direction of scaffolding tools like <a target="_blank" href="https://yeoman.io/">Yeoman</a>, <a target="_blank" href="https://www.hygen.io/">Hygen</a>, and <a target="_blank" href="https://github.com/plopjs/plop">Plop</a>, but we soon realized they wouldn’t be enough for our use case. Essentially, those tools are focused mainly on scaffolding features, but that’s just one part of the story for us. We wanted to get insights into our monorepo — what’s being built and what dependencies those projects have. We scoped out <a target="_blank" href="https://rushjs.io/">Rush</a> and later <a target="_blank" href="https://nx.dev/">Nx</a>, before finally deciding to give Nx a shot. It’s feature-rich, open to the community, extensible, and built on the battle-tested Angular CLI.</p>
<p>We started in Q1 2020. Fast forward a year, and we have registered more than 230 projects — every one of them benefiting from the ecosystem we put in place. We had to use some shortcuts to be able to work within an already-existing codebase, but in general, we were able to leverage Nx for every part of our codebase.</p>
<p>I guess now’s the right time to dive a bit deeper and explain how it really works — and how it affects our engineers on a daily basis. Let me describe a day in the life of our favorite employee, Joe Doe (you know, my colleagues would be mad at me if I picked one person and not them 🤓)</p>
<h2 id="heading-the-story-of-joe-doe">The story of Joe Doe</h2>
<p>It’s a lovely day, and I’m working on an exciting new feature. The first thing I’m going to do is spin up the module for it.</p>
<p>I run a command, write the name of the module, and puff! My files are automatically created. Then I realize that I also want to scaffold a storybook for it, so I write another command, and abracadabra! The storybook is ready to use.</p>
<p><img src="https://miro.medium.com/max/1204/0*uFHy1iOC-Qp0KzL_" alt class="image--center mx-auto" /></p>
<p>Now it’s time to do some magic. I can write another command that runs tests for my module in watch mode, or I can start the storybook I created earlier and start to prototype the functionality there.</p>
<p><a target="_blank" href="https://nx.dev/getting-started/intro">https://nx.dev/getting-started/intro</a></p>
<p>Once my code is ready to be integrated, I simply open the PR. The CI pipeline is configured and powered by Nx, so the tooling automatically runs tests, linter, and similar only on the affected projects in the dependency graph. After a few minutes, I get some info from our bots. It might look like this.</p>
<p><img src="https://miro.medium.com/max/1400/0*00D3tEbZxhxl9v1W" alt class="image--center mx-auto" /></p>
<p>We use <a target="_blank" href="https://danger.systems/js">https://danger.systems/</a>! 🤘</p>
<p>It’s worth noting that I can access the revision under a special URL, so I can quickly showcase it to my team. Speaking about my team, once they do a review, they might see something like this.</p>
<p><img src="https://miro.medium.com/max/1400/0*EnS9QoBfFaNiwkYs" alt class="image--center mx-auto" /></p>
<p>This is custom tooling built on top of <a target="_blank" href="https://www.npmjs.com/package/@actions/github">@actions/github</a>. Do you want to hear more?</p>
<p>Once the PR is resolved, it’s time to merge it and have a cup of tea. We have robots for that as well — I mean for the merging, not the tea-making. I simply add a label, and <a target="_blank" href="https://kodiakhq.com/">Kodiak</a> will do the rest. It integrates master to my PR, waits for the CI to pass (so we can be sure that the integration is OK), places my PR into a queue, and boom! After a couple of minutes, it’s merged.</p>
<p>Don’t worry about that merge of master to my branch. Since the PR is up to date with master when it is being merged, the merge commit will be rebased away. With that, our master branch looks slick and clean — no branches, just pure fantasy!</p>
<p><img src="https://miro.medium.com/max/1400/0*oPzDF-zfBGeF9Gyw.png" alt class="image--center mx-auto" /></p>
<p>You might ask why we have this step here. Great question! Before, it often happened that we felt our PR was all right. All the tests were passing. You know the drill. But once we merged it to master and the code was integrated, something went wrong with the newest changes. Our master pipeline started to fail, and everybody got blocked by it. So we decided to protect our master and keep integrating code on feature branches.</p>
<p>One great side effect of this approach is that we don’t need to recheck everything during the master pipeline — it has already passed the checks on the feature branch. That means it’s also much faster and can directly proceed with deployment and e2e tests.</p>
<p>So that’s it for today. My PR is merged. Mission accomplished! Now let’s deploy it.</p>
<p><img src="https://miro.medium.com/max/1400/0*gvcYogHBQnmAhbRS" alt class="image--center mx-auto" /></p>
<h2 id="heading-back-to-reality">Back to reality</h2>
<p>Joe Doe succeeded and delivered — after all, I like the phrase, “what’s not in the master doesn’t count.” His feature is already on staging (thanks to continuous delivery) and exactly one click away from production — auto-deployment to production and rollback is something we currently have on our roadmap along with incremental builds, for example.</p>
<p>If you want to know more about how Nx can affect your developer experience, give it a shot! You can check out this wonderful presentation by <a target="_blank" href="https://twitter.com/juristr">@juristr</a> — <a target="_blank" href="https://www.youtube.com/watch?v=MKTknHqBon4">Speed up! Incremental Compilation with Nx</a>.</p>
<p>Remember at the beginning of the article when I said that before we introduced Nx, the CI pipeline took almost 50 minutes to complete? Well, where do we stand now — 5 mins? 15 mins? It all depends on the PR. Some teams have AVG feature pipeline speeds of around 7 mins, some 15 mins. It all depends on how big their PRs are and how many projects they’re affecting. Now that our architecture scales, it’s all in their hands!</p>
<p>By the way, our PR lead time oscillates around 24 hours, which is one of the north-star metrics we measure, and we spotted a significant improvement once we integrated <a target="_blank" href="https://kodiakhq.com/">Kodiak</a> into our flow.</p>
<p><img src="https://miro.medium.com/max/1400/0*SwnM39_JbRxGHvsI" alt class="image--center mx-auto" /></p>
<p>I’m aware that I’ve described these topics faster than Flash running through Central City, but I hope you got some idea of how it works here at Productboard. Initially, I was thinking about doing a series on this topic — but frankly, folks, I don’t trust you would finish it. Feel free to prove me wrong in the comments!</p>
<p>Last but not least, I would like to recognize <a target="_blank" href="https://twitter.com/martin_hotell">@martin_hotell</a> and the rest of the team, who laid the foundations for all of this to be possible! Thanks, Martin!</p>
]]></content:encoded></item><item><title><![CDATA[How We Revamped Our RFC Process at Productboard]]></title><description><![CDATA[A year and a half ago, I took the opportunity to build the Frontend Platform team that I’m still leading today. While I did this for many reasons, one of them was particularly striking: To empower product teams to make high-impact changes, enable and...]]></description><link>https://jukben.codes/how-we-revamped-our-rfc-process-at-productboard</link><guid isPermaLink="true">https://jukben.codes/how-we-revamped-our-rfc-process-at-productboard</guid><category><![CDATA[process]]></category><dc:creator><![CDATA[Jakub Beneš]]></dc:creator><pubDate>Thu, 21 Jan 2021 20:24:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/YLSwjSy7stw/upload/v1643919804200/awMHo7wHU.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A year and a half ago, I took the opportunity to build the Frontend Platform team that I’m still leading today. While I did this for many reasons, one of them was particularly striking: To empower product teams to make high-impact changes, enable and support innovation, and share our learning across the frontend community.</p>
<p>I could talk for hours about exactly how we’ve executed on this and what the situation looked like when we decided to establish the team. But for the purposes of this article, I want to focus on one practical way we’ve improved our ability to make impactful changes as we scale.</p>
<h2 id="heading-addressing-those-growing-pains">Addressing those growing pains</h2>
<p>A couple of months ago, we realized that our Frontend Architecture meeting (as we called it back in the day) no longer worked. It was great when we were essentially half the size we are right now, but it stopped scaling pretty quickly once we grew. And that’s without going into how switching to a completely asynchronous environment took its toll! Ah, 2020, the year of pushing comfort zones!</p>
<p>I teamed up with my mentor from Plato (Hi, <a target="_blank" href="https://twitter.com/jennielees">Jennie</a>!) to figure out how we could improve the way we share ideas, give feedback, and make changes as the team grows. We decided that it might be time to completely revamp our RFC process, which dates back to the origin of the Frontend Platform team. That’s right, we actually had an RFC process in place, but it never really took off. This time, we had to do a few things differently.</p>
<h2 id="heading-designing-an-rfc-process-that-works">Designing an RFC process that works</h2>
<p>Before, heavily inspired by the OSS community around the Facebook projects (namely React, React Native), we had a similar kind of <a target="_blank" href="https://github.com/reactjs/rfcs/blob/master/0000-template.md">template</a> versioned in our monorepo. But I could count on the fingers of one hand how many times it was actually used. Luckily for us, as we grew, we had another problem to solve — we needed a record of impactful changes that we could refer back to in the future. Creating a culture around RFCs effectively allowed us to kill two birds with one stone.</p>
<p>One problem I saw with git-versioned RFCs was that the process was very heavy and formal. You had to open a PR, commit it, and so on. Also, it hasn’t scaled for our other projects — we have a monorepo with mostly frontend code, but we have other services in separate repositories.</p>
<p>As I thought about the problem, I wanted to avoid overcomplicating things by introducing new tools (a new process with a new tool? That’s a fast track to hell!). And truth be told, we have enough tools as it is. So, given that we are pretty fond of Notion here at Productboard — actually, we love it! — I thought, why not use Notion as our RFC database?</p>
<p>We revamped the old template and simplified it (heavily inspired by <a target="_blank" href="https://www.industrialempathy.com/posts/design-docs-at-google/">Design Docs at Google</a>). We also emphasized that it’s an informal document — no need to rigidly follow the template, just put your thoughts down so everyone can participate and be on the same page. Simple.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/co5dwsm7mk56r2eh1en2.png" alt="Diagrams are always a huge plus in RFCs. Right?" class="image--center mx-auto" /></p>
<p>Next, we aligned at a leadership level to support this initiative across our teams. In essence, the process was as simple as this:</p>
<p><em>Are you gonna deliver something cool? Awesome! Do you need to align with more than your team? Sweet, now document it! If not, is it a huge change that will potentially affect someone outside your domain? Great, go ahead and document it!</em></p>
<p>Now the last step: How to spread the word? Confluence has its notifications. Notion has a notification system as well, but it might be hard to follow — it can get noisy, and no one really looks at emails. You know what we like besides Notion? Slack!</p>
<p>So we decided to create a simple notification channel in Slack called “n-rfc”. Each time someone adds an “open” document for comment, a new notification appears there. It’s nothing fancy, but it works. And the best part? It’s <a target="_blank" href="https://github.com/productboard/pb-rfc-notifier">open-sourced</a>, so you can use it as well.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apg3riehhhemr1uz00ep.png" alt="The message is hard coded for now, but can be easily refactored. Feel free to send a PR!" class="image--center mx-auto" /></p>
<h2 id="heading-the-system-works">The system works!</h2>
<p>If you are wondering how it performs, I can honestly say that it has exceeded my expectations. We established our new RFC process at the beginning of Q3 2020, and so far, we have created more than 100 documents this way. That’s pretty impressive given that we’re an organization of under 90 engineers.</p>
<p>That’s it for today, folks. Now over to you — how do you approach changes in your organization?</p>
]]></content:encoded></item><item><title><![CDATA[One Yarn to Rule Them All]]></title><description><![CDATA[At productboard, we rely heavily on Yarn — a fast, reliable, and secure package manager. For those of you who know the ecosystem, it will be pretty obvious how yarn.lock has helped us improve confidence in our projects’ dependencies. For the rest of ...]]></description><link>https://jukben.codes/one-yarn-to-rule-them-all</link><guid isPermaLink="true">https://jukben.codes/one-yarn-to-rule-them-all</guid><category><![CDATA[npm]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Yarn]]></category><dc:creator><![CDATA[Jakub Beneš]]></dc:creator><pubDate>Mon, 03 Feb 2020 23:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1643918935206/qoEQKPpQK.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At <a target="_blank" href="https://productboard.com">productboard</a>, we rely heavily on Yarn — a fast, reliable, and secure package manager. For those of you who know the ecosystem, it will be pretty obvious how yarn.lock has helped us improve confidence in our projects’ dependencies. For the rest of you, here’s a clue from the official documentation:</p>
<blockquote>
<p>Your <em>yarn.lock</em> file ensures that your package is consistent across installations by storing the versions of which dependencies are installed with your package.</p>
</blockquote>
<p>In other words, this means you can be sure that you are working with the same dependencies (and their dependencies) as your colleagues. The same also applies to the CI systems we are using.</p>
<p>Sounds good, right? Sure! But things aren’t always so straightforward. Different versions of Yarn may produce a different yarn.lock. This happened to me when Yarn introduced the integrity field.</p>
<p><img src="https://cdn-images-1.medium.com/max/2000/0*t7XePd99hpd7IERP" alt /></p>
<p>When this happens, you might end up requesting changes to your colleagues’ PR to upgrade their Yarn accordingly and regenerate the file to keep the diff clean.</p>
<p><img src="https://cdn-images-1.medium.com/max/2000/0*lVwdLkpSwYxi6KJQ" alt /></p>
<p>So what now? Should we update the main Readme with the supported version of Yarn? No! We can go one better…</p>
<h2 id="heading-enforcing-the-specific-version">Enforcing the specific version</h2>
<p>One way to enforce a version within one project is to use the engines field in package.json (see <a target="_blank" href="https://docs.npmjs.com/files/package.json#engines">documentation</a>).</p>
<pre><code>{
  "engines": {
    "yarn": "1.21"
  }
}
</code></pre>
<p>This allows you to enforce a specific version of Yarn for everyone who runs Yarn commands, including <code>yarn add</code>, for example.</p>
<p>In this case, when you try to install a new package, you may get this message.</p>
<p><img src="https://cdn-images-1.medium.com/max/2000/0*3bNv9hW5nWUwxxjM" alt /></p>
<p>That’s better. But still, you need to install the requested version manually. Now, let’s imagine that the recommended version will change — you would have to repeat the process again! Frustrating, eh? Don’t worry, there’s a better way…</p>
<h2 id="heading-one-yarn-for-everyone">One Yarn for everyone</h2>
<p>This headline may sound silly but bear with me. The authors of Yarn have been facing this specific issue themselves. Thankfully, they’ve come up with a great solution.</p>
<p>Ladies and gentlemen, allow me to introduce the best way to keep your Yarn version aligned across the team: <a target="_blank" href="https://legacy.yarnpkg.com/lang/en/docs/cli/policies/">yarn policies</a>.</p>
<p>What is yarn policies all about?</p>
<blockquote>
<p>yarn policies set-version offers a simple way to check in your Yarn release within your repository. Once you run it, your configuration will be updated in such a way that anyone running a Yarn command inside the project will always use the version you set.</p>
</blockquote>
<p>And how do I use it?</p>
<pre><code>yarn policies set-version</code></pre>
<p>This command will set the latest stable Yarn as the default for everyone working in the same repository. It’s as simple as that.</p>
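<p>By the way, the command also accepts an explicit version or range, in case you’d rather pin something other than the latest stable release (see the yarn policies documentation linked above); the version number here is just an example:</p>
<pre><code>yarn policies set-version 1.21.1</code></pre>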
<p>A Yarn binary snapshot will be stored within .yarn/releases along with the updated yarn-path in the .yarnrc configuration file. Now, you’ll be using the locally “installed” version of Yarn.</p>
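<p>For the curious, the resulting .yarnrc entry looks something like this; the exact file name depends on the version that was checked in:</p>
<pre><code>yarn-path ".yarn/releases/yarn-1.21.1.js"</code></pre>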
<p>And last but not least, make sure to push these changes to your remote, so everyone within the team gets the same benefits — one exact, locally scoped version of Yarn, no matter what version they have installed. No more yarn.lock deviations! <em>Beautiful.</em></p>
<h3 id="heading-final-touches">Final touches</h3>
<p>If you’re using GitHub, you might be familiar with the language distribution dashboard. This is how our repository looked before we introduced the change I just walked you through. Let’s say you’re using TypeScript, as we do.</p>
<p><img src="https://cdn-images-1.medium.com/max/2000/0*LpcFrWJ37gdYJBoW" alt /></p>
<p>And this is how it looks after I merged my lovely changes.</p>
<p><img src="https://cdn-images-1.medium.com/max/2000/0*U4dPpwnmkey3LY30" alt /></p>
<p>Wait, what? Allow me to explain…</p>
<p>GitHub uses a program called <a target="_blank" href="https://github.com/github/linguist">Linguist</a> to calculate code distribution. And by default, it processes all the versioned files.</p>
<p>Thankfully, you can control this process. Simply edit your .gitattributes as follows to ignore this file from the graph.</p>
<pre><code>.yarn/releases/*.js linguist-vendored</code></pre>
<p>And that’s it! As far as the language stats are concerned, it never happened!</p>
<p>Kudos to <a target="_blank" href="https://twitter.com/@martin_hotell">Martin Hochel</a> for the proofread!</p>
]]></content:encoded></item></channel></rss>