hckrnws
The caveat is that you might end up shaving a yak.
More often than not I end up three or four tasks deep while trying to fix a tiny issue.
I think some people frame yak shaving as a bad thing, but I'm not sure it always is; and often, even when it is, it's because you're resolving debt.
The example with Hal is funny, repeatable (I share it frequently), but also the tasks are (mostly) independent. It feels more like my ADHD. They're things that need to get done, easy to put off/triage, but they make doing other tasks difficult, so maybe they actually shouldn't be put off?
But there's also the classic example where doing something turns into a bigger rabbit hole than expected, usually because we were too naïve and oversimplified the problem. An old manager gave me a good rule of thumb: however long you think something is going to take, multiply it by 3. Honestly I think that number is too low and most people still miss the mark. I'm pretty sure he stole it from Scotty from Star Trek but forgot that even that is fantasy.
Personally I think you have to be careful about putting off the little things. It's a million times easier to solve little problems than big ones. So you have to remember that just because it's a little problem now doesn't mean it'll stay little. The danger is that it's little, so you forget about it. The shitty part is that if you tell your boss, they get upset at you if you solve it now, but you look like a genius if you solve it after it festers. Invisible work...
https://scifi.stackexchange.com/questions/99114/source-of-sc...
There is simply no general recipe for this. Sometimes I put my little tools and libraries in order and then I'm very productive with them and looking back it seems to have been the key enabler to the actual thing getting done. Other times I go dirty mode and just hardcode constants, copy code files under time pressure and looking back it is clear that getting to the same result with the clean approach would have taken months and the benefit for later tasks would be unclear.
I know some are tired of AI discourse, but I found AI can help sharpen the tools. At the same time, I find that my scope grows: dealing with the tools takes just as much time, but now they have more features "just in case" and support platforms or use cases I won't often need. Each addition feels easy enough to just do, but as I said, in total it still takes long.
It's all mostly an emotional procrastination issue underneath it. Though that can go both ways. Sometimes you procrastinate on thinking about the overall architecture in favor of just making messy edits because thinking about the overall problem is more taxing than just doing a small iteration, sometimes you procrastinate on getting the thing done in favor of working on more tightly scoped neat little tools that are easier to keep in mind than the vague and sprawling overall goals.
I used to work as a physical engineer and a common task is "where's that tool?" People leave things at their work station and they float around and well... you can't keep track of things you can't see.
Manager finally got fed up (yes, he was the biggest offender lol) and we organized the whole shop. Gave every tool a specific place. Required tools to be put back. But it actually became easier to put back because everything had a home and we made it so their home was accessible (that's the trick).
Took us like a week to do and it's one of those things that seemed useless. But no one had any doubts of the effectiveness of this because it'd be really difficult to argue that we didn't each spend more than a week (over a year) searching for things. Not only that, it led to fewer lost and broken tools. It also just made people less frustrated and led to fewer arguments. Maybe most important of all, when there was an emergency we were able to act much faster.
So that's changed my view on organizing. It's definitely a thing that's easy to dismiss and not obviously worth the investment but even in just a year there's probably a single event that is solved faster due to the organization. The problem is you have to value the money you would have lost were you not organized. It's invisible and thus easy to dismiss. It's easier because everything else seems so important. But there's always enough time to do things twice and never enough time to do it right.
What looks like "wasting time on procrastination" may be actually "spending time on thinking". Thinking takes time.
Making messy edits is a bet on previous code quality. If you have paid off enough technical debt, you can take out another "technical loan" and expect the rest of the owl to still function despite the mess being introduced. If things are already messy, there's always a risk of making the mess incomprehensible and having it fail in mysterious, seemingly unrelated ways, with the only way to fix it being to git reset --hard to the prior point and do the more right thing. But by then the time has already been wasted.
My approach is usually to timebox it, and cut the losses if an acceptable solution is not yet in sight when the time runs out.
[dead]
I don't like that 'yak shaving' has degenerated into a synonym for boondoggle.
Some explanations of yak shaving split it into a complex form of procrastination and also necessary annoyances - friction - obstacles.
Sharpen your Tools often falls into the latter category, and it's always useful to question whether those 'necessary annoyances' are actually necessary.
It is, like you say, not always necessary to tackle those annoyances right now. But it is a situation where both the Campsite Rule and the Rule of Three hold some sway. As a person whose entire job is about writing code to replace tedious and error-prone human tasks, you need to interrogate yourself any time you start thinking, "This is my life now." Because if anyone has the power to say 'no', it's us.
It's always worth spending 12-15 minutes most times you do a task that you have to do over and over again in service of trying to reduce the task from ten minutes to five or to zero. The reward for engaging in the task more fully rather than putting it off until it has to be done is that you're working toward a day when maybe you don't have to do it at all (you've automated it entirely or you've made it straightforward enough to delegate).
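The arithmetic in that claim is easy to sanity-check with a back-of-the-envelope sketch (all numbers hypothetical, just the 10-to-5-minute example from this comment):

```python
def breakeven_days(invest_min, saved_min_per_day):
    """Days until the cumulative time saved covers a one-off investment."""
    return invest_min / saved_min_per_day

# A single 15-minute investment that cuts a daily 10-minute task
# down to 5 minutes pays for itself in 3 days...
print(breakeven_days(15, 5))   # 3.0
# ...and one that automates the task away entirely pays off in 1.5.
print(breakeven_days(15, 10))  # 1.5
```

Even when most of those 15-minute bets don't pan out, the winners amortize fast enough that the expected break-even stays short.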
Hal's example is so funny because he's using both arms to scoop in everything from Column A and Column B at the same time. Everybody gets a laugh. A couple of those tasks actually had to be done. A couple could have gone on the shopping list.
I always saw it as being multiple subtasks deep into a problem, or opening too many parentheses. Yak shaving does not imply that you're wasting time, only that the task involves solving problems that feel remote to your initial objective.
For example, I want to use ES6 modules on my website, then esbuild to compile them. However masonry.js breaks it, and instead of fixing it, I decided to get rid of it, but that breaks the layout of the /guides page, and while I'm there I might as well reorganise the list of guides.
So now I'm on week 2 of the switch to ES6, but I ended up redesigning a page, writing a bunch of tests, fixing unrelated UI bugs, making a few UX fixes, making changes to the static site generator, etc etc.
I get to do that because I'm self-employed and thinking long-term, but if I was at $PREVIOUS_EMPLOYER doing sprints, my boss would be wondering why I spent an entire sprint on this simple task.
> I don't like that 'yak shaving' has degenerated into a synonym for boondoggle.
What do you mean "degenerated"? The term was always a synonym for procrastination and slacking off. It's just that in some cases the procrastinator/slacker argued otherwise.
https://web.archive.org/web/20210112174206/http://projects.c...
> You see, yak shaving is what you are doing when you're doing some stupid, fiddly little task that bears no obvious relationship to what you're supposed to be working on, but yet a chain of twelve causal relations links what you're doing to the original meta-task.
What do you think a “causal relation[ship]” means? It means need, not avoidant behavior.
Aside from the origin, there're situations in which you need to somehow shave the yak.
Yes, it's about procrastination, but not of the task at hand. You procrastinate in some older task that's really blocking what you need to do now.
It's chain procrastination. The oldest task blocks an older task that blocks an old task that blocks the current task. It's evil because it overflows the task planning buffer. You also get used to saying "nah" whenever you start to think about any task in that general direction.
Maybe you should shave the fricking yak already. Or maybe you should use fake yak hair, idk.
The thing about the Mikado method is that you’re taking what from your perspective is a top down task and flipping it to bottom up. Which is for instance more amenable to refactoring, which is a bottom up task.
Sometimes when you get to the bottom you discover a shorter route back up to the top. The trap is that since you "already wrote the code" it seems a shame to delete it. But that code hasn't been reviewed or vetted, and "code is not the bottleneck". You really do want to delete it, because there's a new version that's 1/3 the code and touches less of the existing system, and so will take less work to review and vet.
I knew which vid it was gonna be before even clicking. Still hilarious.
Relevant XKCD: https://xkcd.com/349
This comic definitely speaks to me on a deep emotional level, but at the same time one of the things I like so much about computers is they're essentially unbreakable.
Not that you can't get one into a non-working state (that is, of course, trivial), but with the lone exception of deleting data, you can always restore a computer; the only tool needed is some kind of boot disk.
(Compare that to breaking a literal hammer, you'd need a pretty specialized set of tools handy if you wanted to actually restore it)
Ah; if only this was really true. You can certainly get a computer into a permanently bricked state, especially an embedded device. Even a modern x86 machine can be basically toasted by a bad firmware update.
Comment was deleted :(
And perhaps less well known to the Hacker News crowd, relevant Malcolm in the Middle: https://www.youtube.com/watch?v=5W4NFcamRhM
That’s the same video (but in a higher quality) as in the grandparent comment.
This only happens when the tools have become so neglected that every single one is broken. You should still take the time to pay down that debt, and in the process learn the lesson to pay the debt in smaller chunks in the future.
You are going to pay it anyway; it's not an "if", it's a "when".
Weird. I happen to be watching Malcolm in the Middle and I find a link to Malcolm in the Middle
LOL! Well, somebody's gotta shave it!
I'm nearly 30 years into my career and I feel like the tools today are so broken that if I was going to "fix" them that's all I would ever be doing.
I write a bunch of my code on Linux with Clio, and for the last several years that "tool" has gotten buggier and buggier. Sometimes there's a similar issue with breakpoints not breaking.
But I just can't be bothered anymore. If something doesn't work out of the box it doesn't work and I simply move on and find another way around the issue. Life's too short to fix other people's code too.
Engineering is a continual lesson in axe-sharpening (if you have 6 hours to chop down a tree, spend the first 4 sharpening your axe).
My favorite framing, from Kent Beck: “first make the change easy, then make the easy change.”
I recently got assigned to enhance some code I've never seen before. The code was so bad that I'd have to fully understand it and change multiple places to make my enhancement. I decided that if I was going to be doing that anyway, I might as well refactor it into a better state first. It feels so good to make things better instead of just making them do an extra thing.
Just be careful that 'better' doesn't just mean 'written by me'. I've seen that a lot too
More often than not I've seen this be the case. Refactoring as "rewrite using my idiomatic style, so that I can understand it", which does not scale across the team so the next engineer does the same thing.
This approach is also what I'm still missing in agentic coding. It's even worse there because the AI can churn out code and never think "I've typed the same thing 5x now. This can't be right.".
So they never make the change easy because every change is easy to them... until the lack of structure and re-use makes any further changes almost impossible.
This is a great observation. I've noticed the same pattern with AI-generated code and deployment configs. Ask it to set up a Node.js service and it will happily write a new PM2 ecosystem file every time rather than noticing you already have one with shared configuration.
The "make the change easy first" mindset requires understanding what already exists, which is fundamentally a compression/abstraction task. Current models are biased toward generation over refactoring because generating new code has a clearer reward signal than improving existing structure. Until that changes, the human still needs to be the one saying "stop, let's restructure this first."
I don't think you spend all 4 hours up front, friend.
In my experience you're going to want a sharp axe later in the process, once you've dulled it.
Not sure if that ruins the analogy or not.
I have mixed feelings here because on one hand I prefer the “axe” when programming (vim with only the right extensions and options). But for trees… chainsaws are quite a bit easier. Once it is chopped down, maybe rent a wood splitter.
[dead]
Most of my colleagues are content to spend 50 hours chopping up the tree with a pipe. We don't have time to spend making things work properly! This tree has to be finished by tomorrow! Maybe after we've cut up this forest, then we'll have a bit of spare time to sharpen things.
As Charlie Munger used to say “show me the incentives and I’ll show you the outcome”.
What are the incentives for these developers? Most businesses want trees on trucks. That's the only box they care to check. There is no box for doing it with a sharp axe. You might care, and take the time to sharpen all the axes. Everyone will love it, you might get a pat on the back and a round of applause, but you didn't check any boxes for the business. Everyone will proceed to go through all the axes until they are dull, and keep chopping anyway.
I see 2 year old projects that are considered legacy systems. They have an insurmountable amount of technical debt. No one can touch anything without breaking half a dozen other things. Everyone who worked on it gets reasonably rewarded for shipping a product, and they just move on. The business got its initial boxes checked and everyone who was looking for a promotion got it. What other incentives are there?
It's not about incentives; it's just bad management. As you said, the business just wants trees on trucks, so good management would realise that you need to spend some time sharpening axes to get trees on trucks quickly. It just seems to be something that a lot of software managers don't get.
I don't think every company is like this though. E.g. Google and Amazon obviously have spent a mountain of time sharpening their own axes. Amazon even made an axe so sharp they could sell it to half the world.
Early on in Amazon’s history (long before same day shipping), they added a feature that would tell you, on a product page, whether you had recently bought that same product. The metrics spoke loud and clear: it caused purchase count to go down. Human common sense about the customer’s experience overruled the data and they have some variation of that feature to this day. That’s the “customer obsession,” but unfortunately most businesses only copy the “data driven”.
There is some amount of time to spend on sharpening that, if you spend either more or less time sharpening, net amount of trees on trucks goes down. Smart businesses look for that amount. Really smart businesses know what the amount is, and make sure that they spend very close to that amount of time sharpening.
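That interior sweet spot is easy to see in a toy model (the speed curve and constants below are invented purely for illustration): chopping speed rises with sharpening time but with diminishing returns, while sharpening eats into the time left to chop.

```python
import math

def trees_on_trucks(sharpen_h, total_h=6.0):
    """Trees chopped in a fixed 6-hour window, given hours spent sharpening.
    Speed gains are diminishing (sqrt), and sharpening time isn't chopping time."""
    speed = 1.0 + 2.0 * math.sqrt(sharpen_h)  # trees/hour; constants invented
    return (total_h - sharpen_h) * speed

# Grid-search the sweet spot in 6-minute increments.
best = max((s / 10 for s in range(61)), key=trees_on_trucks)
print(best)                             # ~1.6 hours of sharpening is optimal here
print(round(trees_on_trucks(0.0), 2))   # no sharpening at all: fewer trees
print(round(trees_on_trucks(4.0), 2))   # Lincoln's 4 hours: also fewer trees
```

Both extremes (never sharpening, and sharpening for four of the six hours) lose to the interior optimum, which is the point: the optimum exists, and the really smart businesses actually go looking for it.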
Indeed. My point is that that right amount is waaaaay more than most people think it is. At least in my experience.
I think part of the problem is people get... I guess "speed blindness". When stuff is taking ages they just think that's how long it takes. They don't realise that they could be twice as fast if they spent some of their time fixing & improving their tooling.
Your semi-regular reminder that Kent Beck was one of the XP brain trust behind the Chrysler Comprehensive Compensation System disaster.
https://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compens...
It's not always that, uh, clear-cut...
Sometimes sharpening the axe means breaking it completely for people still trying to cut down trees on WinXP, but you don't know that because you can't run those tests yourself, and grovelling through old logs shows nobody else has either since 2017 so it's probably no big deal.
Sometimes it's not clear which part is actually the cutting blade, and you spend a long time sharpening something else. (If you're really unlucky: the handle.)
Excellent advice. I try to follow it in my daily work, with some success.
Excellent follow-up advice: now stop fixing your tools, and go fix your actual problem instead. I try to follow it in my daily work, with noticeably less success.
> The very desire to fix the bug prevented me from seeing I had to fix the tool first, and made me less effective in my bug hunt
Kenneth Stanley's book "Why Greatness Cannot Be Planned: The Myth of the Objective" is dedicated to this phenomenon
biggest lesson i learned here: all-in-one tools that promise to handle your whole workflow are almost always worse than stitching purpose-built ones together.
i tried every no-code automation platform (make, airtable, n8n) for a content pipeline i was building. they all break the moment you need to run it more than twice a day at any real scale. weird rate limits, silent failures, state management nightmares.
ended up just writing python scripts that call APIs directly. less "elegant" but infinitely more debuggable. the fix-your-tools moment for me was realizing the tool itself was the problem -- sometimes fixing means replacing with something simpler and more transparent.
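For anyone weighing the same trade-off: the "plain script" version really can be this small. A minimal sketch (the injected fetch function and URL are placeholders, not from this comment) — every failure is printed and ultimately re-raised instead of disappearing into a platform's silent-retry logic:

```python
import time

def run_step(fetch, url, retries=3, backoff=1.0):
    """One pipeline step: retry with linear backoff, fail loudly.
    `fetch` is injected so the step is trivial to test and debug."""
    for attempt in range(1, retries + 1):
        try:
            return fetch(url)
        except Exception as exc:
            print(f"attempt {attempt}/{retries} failed: {exc}")
            if attempt == retries:
                raise  # surface the failure; no silent half-run state
            time.sleep(backoff * attempt)
```

In real use `fetch` might wrap `urllib.request.urlopen` or `requests.get`; the point is that the retry policy, the state, and the logs are all yours to read.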
I haven’t tried n8n and similar tools but I always had the suspicion that they wouldn’t be that good. Would you say there are scenarios where tools like n8n would be better to use than Python and calling some APIs?
And this is why I just love Airflow so much.
I’ve run into this a lot. Sometimes fixing a small friction point in a tool saves hours later, but it’s easy to fall into endlessly tweaking instead of actually finishing the work. The hard part is knowing when the tool is “good enough” and moving on.
If you like what you just read you should probably never install Emacs.
You're welcome.
Tools exist to be an energy/effort multiplier, so it's pretty intuitive that increasing that multiplier will make it easier to get more done.
In practice it's pretty difficult to find the balance between yak shaving and piling in unnecessary manual labour by just trying to do the work with existing (possibly poorly fitting) tools.
If you're planning to stick with your current tools for a long time, each 1% improvement compounds massively over time, so that balance is probably much closer to yak shaving than most people might realise.
Using the debugger to understand/read code is invaluable. Seeing live stacks is so powerful compared to static analysis.
I'm not convinced. At times it can be valuable, but at other times you can go around in circles, changing and checking variables/breakpoints constantly but never finding the problem. Often thinking about the problem and what is important is what you need. Playing in the debugger is fun and feels like progress, but it can just be a distraction from understanding the real problem.
I'm not completely against debuggers, but in my experience they are only useful for getting the trace of the problem when it first occurs; then use static analysis until you have a theory the debugger can prove/disprove. But only prove/disprove that theory, don't keep looking: you will feel productive but in fact be spinning in circles.
As with most continuous arguments in SWE, it really depends. I used to do a lot of debugging of random (i.e. not written by me) bioinformatics tools and being able to just fire up gdb and get an immediate landscape of what the issues were in the program was just invaluable.
At times when I was more familiar with the program, or there were fewer variables to track, it was less helpful.
I’m talking about using debuggers not even to debug, but to familiarize yourself with the codebase and gain general understanding.
Measure of progress for me is formulating and answering questions. Sometimes trying to answer a question leads to formulating sub questions.
This is the reminder I needed. For some projects the Python LSP I am using in Neovim just breaks sometimes. It's always so frustrating when I end up fuzzy searching instead of just jumping to a declaration, or have to restart it.
It's not just the tools, it is your tests. Most times you encounter and fix a bug, your first question should be 'Why didn't my tests catch this?'
Yes, but the answer depends on the bug. 100% test coverage leads to brittle tests, when any change leads to many broken tests, and fixing them is like repeating the change multiple times.
I would fix my tools but apparently my CPU is too old to run the Rust debugger which needs SSE4.
My friend once told this joke:
> "A good programmer, when encountering a debugger bug," he paused, cleared his throat, and said solemnly: "should immediately drop the program they're debugging and start debugging the debugger instead!" The auditorium once again erupted in thunderous applause.
I aim for the Boy Scout rule: always leave things better than you found them. It's always a balance and you have to not lose the forest for the trees. Always ask: what is the end goal, and am I still moving forward on that?
When I find something wrong with my tools I file a report or submit a fix.
Good lesson! Can be tempting to fix the problems up the chain especially if the problem might happen again in the future. It depends on how much time, attention, and number of steps up the chain. Sometimes a work around keeps you moving forward but you miss some interesting (rare? learnings).
If you're still dipping your toes into an LLM world, this is an excellent place to begin. I helped with a deploy at work the other day; we have some QA instructions (Notion). I pointed the LLM at one of the sections, asked it to generate a task list for each section, and once that looked good, had it convert the processes into a set of scripts. The latest models make short work of scriptable stuff that you can use for debugging, testing, poking, summarizing, etc.
This is the seventh habit from The 7 Habits of Highly Effective People: sharpen the saw.
My version of this is ‘always be toolin’, but then of course one must use judgement lest it be better to just get on with it.
Ugh, this brings on flashbacks to when I had to work with Ruby, and the *** debugger would break with every single release. The RubyMine IDE that 45% of the company used was based on some bizarre custom Ruby gems to enable debugging, and that crap would take a month to be fixed by JetBrains. 10% used VSCode where debugging would sometimes work and sometimes not.
That's why this course is so important: https://missing.csail.mit.edu
> So I fixed the debugger (it turned out to be a one-line configuration change)
That line links to the commit, which adds
.withDebug(true)
to an invocation of GradleRunner in a file named AllFunctionalTests.kt in the krossover project. My question is:
Why can the software choose whether, when I run a debugger on it, the debugger will work?
It can't, of course, so what's going on?
The tests validate the plugin by executing actual builds in an isolated/temporary Gradle project. Debugging doesn't work out of the box because the build runs in another process.
https://docs.gradle.org/current/userguide/test_kit.html#sub:...
I also clicked through to that and was similarly confused. Not a Kotlin dev but this doesn’t really seem like fixing your tools? More like understanding them properly. I wouldn’t call a configuration change like this “debugging the debugger” as another comment mentioned.
I’d also like to know the answer to your question about what is going on. I know Java and maven but not kotlin or gradle, but wouldn’t a debugger be interfacing more at the JVM level?
Yea, good point. The debugger should be able to say up front, "This code isn't debuggable", or give some warning to that effect.
"Give me six hours to chop down a tree and I will spend the first 4 sharpening the axe"
I always liked "The craftsman who wishes to do his work well must first sharpen his tools" by Confucius
[flagged]
hello, it's me the Language Fairy.
Sometimes when people use an expression to convey an idea concisely, the details of the imaginary scenario within the expression are less important than the concept being expressed (just so long as the general shape of that scenario fits the thing being discussed).
To be more particular, the exact time it takes to sharpen an ax and chop down a tree are not important here.
people are so far removed from the ax that they don't realize my point. I'm sure the people Lincoln was talking to had more knowledge and so would get a different picture from the modern reader.
OP here, thanks for submitting!
hey, the idea of Krossover is actually dope! my sole question is, why does it exist?
I understand that one might call Rust from Kotlin for performance reasons (I do that often, Mozilla does, some others too), but Kotlin from Rust? where would it be useful?
no snark or subtext here, I'm genuinely curious
Calling Kotlin from Rust (and other languages) is useful when you want access to an existing Kotlin codebase and would rather avoid creating a full-blown port. I guess most people don't do things like this because creating bindings for languages that are not C (or C-like) is usually cumbersome. Krossover is trying to fill that gap for Kotlin. Does that make sense?
pretty much!
I'm still curious about case studies. I can imagine that something has SDK for Kotlin but not for Rust, yet outside of that case, technical benefits are not yet obvious to me.
You're welcome! Love the article, I hope you write more.
How'd you get the notice that this was submitted so quickly?
I got a notification through F5 bot (https://f5bot.com/)
Good to know, thanks!
If you watch your refers you'll see HN pretty easily. Could be even setup as a notification.
Also, FYI: Claude is very good at fixing tools
That's what I actually used to fix this one! I'm not too deep into the JVM ecosystem, so I gave Claude a try just in case... and it fixed it :)
Btw I didn't mention it in the blog post, because I think that would have derailed the conversation (after all, the point of the article is not "use LLMs", but "fix your tools"). In any case, I agree that LLMs can make it easier to fix the tools without getting side-tracked.
"first we shape our tools, then our tools shape us"
It’s 2026. Just use AI for debugging. At least try it. This could have been solved in 15 minutes.
[dead]
This comic ("Is it worth the time?") is categorically incorrect, because it assumes a static system. In reality, when you make something take less time, you can do it more often; sometimes much more often. See: the Jevons paradox. You would never have discovered that CI/CD was a useful strategy if you only did a full build when you released, because you'd have had no reason to optimize your build.
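To make the static-system objection concrete (the build times and run rates below are invented): the comic's table prices a speedup at your current run rate, but the run rate is itself a function of how cheap the task is.

```python
def hours_saved(old_min, new_min, runs_per_day, horizon_days=5 * 365):
    """xkcd-1205-style accounting: value of a speedup at a FIXED run rate."""
    return (old_min - new_min) * runs_per_day * horizon_days / 60

# Static view: one 30-minute build per day, cut down to 3 minutes.
print(hours_saved(30, 3, runs_per_day=1))  # 821.25 hours over five years

# Jevons view: once a build costs 3 minutes you run it 20x a day (CI).
# At the old price that would be 10 hours/day, i.e. you'd simply never
# do it, so the speedup didn't "save time" so much as buy a capability.
print(30 * 20 / 60, 3 * 20 / 60)  # 10.0 vs 1.0 hours/day
```

The static table undervalues the change because the biggest payoff, running the task 20x as often, never shows up as "time saved" at all.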
[dead]
Next time use AI.
Funny you got downvoted when the OP admitted that's what they did.