Love Handy. I use it too when dealing with LLMs. The other day I asked ChatGPT to generate interview questions based on a job description and then answered them using Handy. So cool!
I use Spokenly with the local Parakeet 0.6B v3 model + Cerebras gpt-oss-120b for post-processing (cleaning up transcription errors and fixing technical mondegreens, e.g., `no JS` → `Node.js`). The transcription and processing delay is almost imperceptible. I trigger transcription with the right ⌥ key.
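For anyone wanting to roll their own cleanup pass: here's a minimal sketch of that kind of post-processing step, assuming an OpenAI-compatible endpoint (Cerebras exposes one) and the `openai` Python package. The base URL, model id, and prompt wording are my guesses, not Spokenly's actual pipeline:

```python
# Hypothetical transcript-cleanup pass: send the raw dictation to an
# OpenAI-compatible endpoint and ask it to fix technical mondegreens.
# Base URL, model id, and prompt are assumptions, not Spokenly's internals.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_CEREBRAS_API_KEY",
)

PROMPT = (
    "Clean up this dictated text. Fix transcription errors and technical "
    "mondegreens (e.g. 'no JS' -> 'Node.js', 'pie torch' -> 'PyTorch'). "
    "Preserve the meaning and wording otherwise. Return only the cleaned text."
)

def clean_transcript(raw: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-oss-120b",  # model id is a guess; check your provider's listing
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": raw},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(clean_transcript("add a no JS server with express and fast API"))
```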
I use Raycast + Whisper Dictation. I don't think there is anything novel about it, but it integrates nicely into my workflow.
My main gripe is that when the recording window loses focus, I haven't found a way to bring it back and continue the recording session. So occasionally I have to start from scratch, which is particularly annoying if it happens during a long-winded brain dump.
I built my own open-source tool to do exactly this: I run something like `claude $(hns)` in my terminal, start speaking, and when I'm done, claude receives the transcript and starts working. See the workflow here: https://hns-cli.dev/docs/drive-coding-agents/
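The core trick is just that the dictation command prints the finished transcript to stdout, so shell command substitution can hand it to `claude` as an argument. Here's a minimal sketch of that pattern (not hns itself), assuming the sounddevice, soundfile, and openai packages and an OPENAI_API_KEY in the environment:

```python
# dictate.py: record until Enter, transcribe, and print only the transcript
# to stdout so it can be used via command substitution, e.g. claude "$(python dictate.py)"
import sys
import tempfile

import numpy as np
import sounddevice as sd
import soundfile as sf
from openai import OpenAI

SAMPLE_RATE = 16_000

def record_until_enter() -> np.ndarray:
    """Capture mono microphone audio until the user presses Enter.
    Prompts go to stderr so stdout stays clean for the transcript."""
    chunks = []

    def callback(indata, frames, time, status):
        chunks.append(indata.copy())

    with sd.InputStream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
        print("Recording... press Enter to stop.", file=sys.stderr)
        input()
    return np.concatenate(chunks)

def main() -> None:
    audio = record_until_enter()
    with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
        sf.write(tmp.name, audio, SAMPLE_RATE)
        client = OpenAI()
        with open(tmp.name, "rb") as f:
            transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    # Only the transcript goes to stdout; everything else went to stderr.
    print(transcript.text)

if __name__ == "__main__":
    main()
```

This sketch uses a hosted Whisper endpoint for brevity; the same stdout-only contract works with a local model.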
There are a few apps nowadays for voice transcription. I've used Wispr Flow and Superwhisper, and both seem good. You can map some hotkey (e.g., ctrl + windows) to start recording, and when you press it again to stop, the transcript gets pasted into whatever text box you have open.
Superwhisper offers some AI post-processing of the text (e.g., formatting into nice bullets or fixing grammar), but this doesn't seem necessary and just makes things a bit slower.
I do the same. On Mac I use MacWhisper. The transcription does not have to be correct. Lots of times it writes the wrong word when I'm talking about technical stuff, but Claude understands which word I mean from context.