2026-04-03 hylaeus
I was curious about my overall time investment in Hadron so in February I installed some time tracking software on my laptop.
I acknowledge there's some grind culture here for what is an unpaid and obscure personal project. The tracking software is quite invasive because it has to track what you're doing in each application in order to automatically distinguish work from play. I didn't want to punch a clock when working on Hadron, but that means I have consented to let a company collect data about every activity I perform on my computer.
According to the tracking software, in March I worked on Hadron for a bit over 42 hours. Most of that work happened earlier in the month; I've been struggling with motivation more recently. Studying the tracking data, I find that I could probably use some support on focus and attention. Thankfully, I'm going through a diagnostic process for ADHD and Autism right now, so there might be something to learn there that could support me.
I've started working on the LSP features in Hadron, and although I haven't made significant progress in terms of committed code, I've learned a lot about asynchronous Rust, and written and discarded a number of prototypes.
LSP support, specifically the low-latency interactive language features, caused me to revisit a lot of design decisions in Hadron. Most of my thinking was around designs that might help to elegantly reduce the amount of recomputation that must occur when analyzing incremental changes to a file.
For example, if the user is typing a few characters to extend the name of a variable, say from var a to var abc, in the middle of a few-thousand-line file, how can we efficiently determine what needs to be recomputed? Clearly we shouldn't need to lex and parse the entire file, but are there simple heuristics that could limit the scope of the repeated work?
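One simple heuristic of the kind I mean is to snap the edited byte range outward to the nearest newline boundaries and re-lex only that window. This is a minimal sketch, not Hadron's actual code; the function name and offsets are illustrative:

```rust
// Hypothetical sketch: given the new source text and the byte range of an
// edit, compute a conservative re-lex window by snapping outward to the
// nearest newline boundaries. Real incremental lexers need more care
// (tokens that span lines, lookahead), but this shows the basic idea.
fn relex_window(new_src: &str, edit_start: usize, edit_end: usize) -> (usize, usize) {
    // Snap the start back to just after the previous newline (or file start).
    let start = new_src[..edit_start].rfind('\n').map_or(0, |i| i + 1);
    // Snap the end forward to the next newline (or end of file).
    let end = new_src[edit_end..]
        .find('\n')
        .map_or(new_src.len(), |i| edit_end + i);
    (start, end)
}

fn main() {
    // Editing "var a" into "var abc" in the middle of a larger file:
    let src = "var x = 1;\nvar abc = 2;\nvar y = 3;\n";
    // The edit inserted "bc" at byte offsets 16..18, inside the second line.
    let (start, end) = relex_window(src, 16, 18);
    // Only the second line falls inside the re-lex window.
    assert_eq!(&src[start..end], "var abc = 2;");
    println!("{}..{}", start, end);
}
```

After re-lexing the window, you can compare the new tokens against the old ones at the window's edges; if they match up, the parse outside the window is unaffected.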
I looked at rust-analyzer, which uses an interesting compute cache layer called salsa. Caching is interesting to consider from an architectural perspective, because salsa requires some formalism about function inputs and outputs in order to use the library effectively. I didn't want such a heavyweight dependency, and it feels like overkill for my needs, but it was fun to read about.
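The formalism amounts to something like this: inputs carry a revision number, and a derived query caches its result alongside the revision it was computed from, recomputing only when the revision has advanced. This is a toy illustration of that idea, not salsa's real API; all the names here are made up:

```rust
// Toy revision-based memoization, in the spirit of (but much simpler than)
// salsa. An input bumps its revision on every change; a derived query
// recomputes only when its cached revision is stale.

struct Input {
    text: String,
    revision: u64,
}

impl Input {
    fn set(&mut self, text: String) {
        self.text = text;
        self.revision += 1; // any change to the input advances the revision
    }
}

struct LineCountQuery {
    cached: Option<(u64, usize)>, // (input revision, cached result)
    computations: usize,          // how many times we actually recomputed
}

impl LineCountQuery {
    fn get(&mut self, input: &Input) -> usize {
        if let Some((rev, result)) = self.cached {
            if rev == input.revision {
                return result; // cache hit: the input hasn't changed
            }
        }
        self.computations += 1;
        let result = input.text.lines().count();
        self.cached = Some((input.revision, result));
        result
    }
}

fn main() {
    let mut input = Input { text: "var a;\nvar b;\n".into(), revision: 0 };
    let mut query = LineCountQuery { cached: None, computations: 0 };
    assert_eq!(query.get(&input), 2);
    assert_eq!(query.get(&input), 2); // served from cache
    assert_eq!(query.computations, 1);
    input.set("var a;\nvar b;\nvar c;\n".into());
    assert_eq!(query.get(&input), 3); // revision advanced, so we recompute
    assert_eq!(query.computations, 2);
}
```

The real library adds dependency tracking between queries so that a change invalidates only the queries that transitively read it, which is where most of the required formalism comes from.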
I had an instructive conversation with Scott Carver, the author of the current vscode-supercollider LSP plugin. I was surprised to learn that even with the LSP running in sclang Scott hasn't had any performance issues, even when working on larger-scale projects.
SuperCollider isn't nearly as complex as a large Rust project, so hearing about Scott's experience with the current LSP's performance made me feel that I've been doing some premature optimization. It's time to get some baselines and benchmarks in place, to establish some norms around functionality and performance.
And then I just kinda... ran out of gas. I'm recovering from major surgery, and still dealing with some pain and fatigue, and it finally caught up with me in the latter half of March. So I'm taking the time I need to work on getting better. Hopefully towards the latter half of April I'll feel well enough to start tackling some of the Hadron work again.