Had so much fun last time reading my own CT scans with Claude Code that I’m doing it again. Different problem this time, same impulse.
One side effect of my cancer treatment: left vocal cord paralysis.
Radiation damaged the nerve, so the cord can’t close properly. Air leaks through and my voice sounds like I’m permanently getting over a bad cold. Great at parties.
It might recover. It might not. Nobody can tell me how fast, or how far.
So I did what any reasonable software engineer would do: I built a CLI tool in Rust to measure my own voice deterioration.
Voice quality has well-established clinical metrics — jitter, shimmer, harmonics-to-noise ratio, maximum phonation time. Published thresholds exist. A healthy adult can sustain a vowel for 15–25 seconds. I’m lucky to hit eight. Progress is still progress.
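To give a flavor of what these metrics actually measure: local jitter is cycle-to-cycle variation in pitch period, and local shimmer is the same idea applied to cycle amplitude. Here’s a minimal sketch of those two, following the standard clinical definitions — the function names and the sample values are mine, not the tool’s:

```rust
fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

/// Local jitter (%): mean absolute difference between consecutive
/// glottal cycle periods, divided by the mean period.
fn jitter_percent(periods: &[f64]) -> f64 {
    let diffs: Vec<f64> = periods.windows(2).map(|w| (w[1] - w[0]).abs()).collect();
    100.0 * mean(&diffs) / mean(periods)
}

/// Local shimmer (%): the same formula applied to cycle amplitudes.
fn shimmer_percent(amps: &[f64]) -> f64 {
    let diffs: Vec<f64> = amps.windows(2).map(|w| (w[1] - w[0]).abs()).collect();
    100.0 * mean(&diffs) / mean(amps)
}

fn main() {
    // Fabricated example cycles around 45 Hz (period ≈ 22 ms).
    // In the real tool these would come from the pitch tracker.
    let periods = [0.0222, 0.0225, 0.0219, 0.0228, 0.0221];
    let amps = [0.81, 0.78, 0.83, 0.76, 0.80];
    println!("jitter  ≈ {:.2}%", jitter_percent(&periods));
    println!("shimmer ≈ {:.2}%", shimmer_percent(&amps));
}
```

The whole point of tracking these weekly is that each one is a single number with published normal ranges, so a trend line is meaningful even to a non-clinician.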
The tool records my voice, runs signal processing, stores the results, and lets me ask Claude or GPT to interpret the numbers in plain English. Claude Code wrote 100% of the Rust. I supervised, which mostly means I said “yes” a lot and occasionally asked why.
The interesting part was pitch detection. Most detectors floor at around 75 Hz — perfectly fine for healthy voices, but vocal cord paralysis can push your pitch down into the 40–50 Hz range. Standard detectors hear that and call it rumble. Rude, but technically fair.
We had to rethink things from first principles: analysis windows, buffer sizes, confidence thresholds. Claude Code didn’t just write the code. It reasoned about the physics of a damaged vocal cord.
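The core of that reasoning is simple once stated: an autocorrelation-style detector can only find a pitch whose period fits in the analysis window at least twice. A 75 Hz floor gets away with ~27 ms windows; a 40 Hz floor needs ~50 ms, which cascades into bigger buffers and different confidence gates. A sketch of that argument, assuming a plain autocorrelation detector at 44.1 kHz — the constants and the 0.5 confidence threshold are illustrative, not what the tool actually uses:

```rust
const SAMPLE_RATE: f64 = 44_100.0;
const F_MIN: f64 = 40.0; // lowered floor for a paralyzed cord
const F_MAX: f64 = 400.0;

/// Autocorrelation pitch estimate over lags spanning F_MAX..F_MIN.
fn detect_pitch(frame: &[f64]) -> Option<f64> {
    let min_lag = (SAMPLE_RATE / F_MAX) as usize;
    let max_lag = (SAMPLE_RATE / F_MIN) as usize;
    // The window must hold at least two periods of the lowest pitch,
    // otherwise the longest lags have nothing to correlate against.
    if frame.len() < 2 * max_lag {
        return None;
    }
    let energy: f64 = frame.iter().map(|x| x * x).sum();
    let (mut best_lag, mut best_r) = (0, 0.0);
    for lag in min_lag..=max_lag {
        // Correlate the frame with itself shifted by `lag` samples.
        let r: f64 = frame[..frame.len() - lag]
            .iter()
            .zip(&frame[lag..])
            .map(|(a, b)| a * b)
            .sum();
        if r > best_r {
            best_r = r;
            best_lag = lag;
        }
    }
    // Confidence gate: only accept a strongly periodic match,
    // so low-frequency rumble doesn't get reported as voice.
    (best_r / energy > 0.5).then(|| SAMPLE_RATE / best_lag as f64)
}

fn main() {
    // Synthetic 45 Hz tone, roughly where a paralyzed cord can sit.
    let frame: Vec<f64> = (0..4096)
        .map(|i| (2.0 * std::f64::consts::PI * 45.0 * i as f64 / SAMPLE_RATE).sin())
        .collect();
    // A 4096-sample window (~93 ms) covers two 40 Hz periods and
    // finds the pitch; a 2048-sample one (~46 ms) does not, and
    // the detector declines rather than guess.
    println!("{:?}", detect_pitch(&frame));
    println!("{:?}", detect_pitch(&frame[..2048]));
}
```

With a 75 Hz floor, `max_lag` is only 588 samples and a 27 ms window suffices — which is exactly why off-the-shelf detectors pick that floor, and exactly why it fails here.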
I run a session every week. Five minutes. The data accumulates until the trend lines start to mean something.
Last time, Claude Code read my scans and gave me an answer. This time it built the thing that reads my voice — a tool that generates its own data, tracks its own history, and slowly learns what “better” looks like.
All DSP implemented from scratch. No high-level audio library. Claude Code didn’t reach for an abstraction.
It reached for the definition.