Ooooh. If I stream the JSON from the speech recognition API (or see if just keeping track of timestamps is enough), I could calculate the words per minute from the last 5 minutes of data, then update the modeline or set the buffer background to remind me to slow down whenever I'm over a target WPM, since otherwise I tend to hover around 190. And maybe I can investigate piping the system audio as another channel, or to another process, and displaying it in another buffer so that I can live-transcribe the other person's side of the conversation…
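A minimal sketch of that rolling-window WPM idea, just to convince myself the timestamps-only approach could work (the class name, window size, and target value are placeholders I made up; the real version would be fed by the streamed transcription events and would poke the modeline or buffer face from there):

```python
import time
from collections import deque

WINDOW_SECONDS = 5 * 60   # rolling window: the last 5 minutes of speech
TARGET_WPM = 160          # hypothetical target to slow down toward

class WpmTracker:
    """Track words per minute over a rolling window of transcription events."""

    def __init__(self, window=WINDOW_SECONDS):
        self.window = window
        self.events = deque()  # (timestamp, word_count) pairs, oldest first

    def add(self, word_count, timestamp=None):
        """Record one transcribed chunk (e.g. one streamed JSON result)."""
        ts = time.time() if timestamp is None else timestamp
        self.events.append((ts, word_count))
        self._evict(ts)

    def _evict(self, now):
        # Drop events that have aged out of the window.
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()

    def wpm(self, now=None):
        """Words per minute over whatever data is currently in the window."""
        now = time.time() if now is None else now
        self._evict(now)
        if not self.events:
            return 0.0
        total_words = sum(n for _, n in self.events)
        elapsed = max(now - self.events[0][0], 1.0)  # guard divide-by-zero
        return total_words * 60.0 / elapsed

    def over_target(self, target=TARGET_WPM, now=None):
        """True when it's time for the 'slow down' reminder."""
        return self.wpm(now) > target
```

So each streamed result just calls `add()` with its word count, and a timer (or the same callback) checks `over_target()` to decide whether to recolor the modeline.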