Terminarchs@slrpnk.net to Technology@lemmy.world • Audacity adds AI audio editing capabilities thanks to free Intel OpenVINO plugins
Why is that?
Honestly, latency/performance stuff. As in: how do VST synths guarantee they can synthesize fast enough to keep up with the audio buffer on whatever hardware the user has? I’m asking because I’ve seen/heard countless VST synths fail at this and sound like a clicky mess, and I feel like if I understood how it’s handled in code it would make more sense to me.
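For what it’s worth, here’s a minimal sketch of that deadline (the numbers and the render_voices() stand-in are made up, not taken from any real plugin): the host asks the plugin for a block of samples, and the callback has roughly block_size / sample_rate seconds to produce them before the driver needs the data.

```python
# Minimal sketch of the real-time budget a synth callback has to meet.
# render_voices() is a hypothetical stand-in for the actual DSP.
import time
import numpy as np

SAMPLE_RATE = 48_000                     # samples per second
BLOCK_SIZE = 256                         # frames the host asks for per callback
BUDGET_S = BLOCK_SIZE / SAMPLE_RATE      # ~5.3 ms to fill this buffer

def render_voices(frames: int) -> np.ndarray:
    """Stand-in for the synth's DSP: here, just a 440 Hz sine."""
    t = np.arange(frames) / SAMPLE_RATE
    return (0.2 * np.sin(2 * np.pi * 440.0 * t)).astype(np.float32)

def audio_callback(frames: int) -> np.ndarray:
    """Called once per buffer by the host. If this takes longer than
    BUDGET_S, the driver plays stale/empty data -> the clicks you hear."""
    start = time.perf_counter()
    block = render_voices(frames)
    elapsed = time.perf_counter() - start
    if elapsed > BUDGET_S:
        # A real plugin can't "catch up"; it drops voices, lowers quality,
        # or simply glitches. Here we just report the overrun.
        print(f"underrun: took {elapsed * 1e3:.2f} ms, budget {BUDGET_S * 1e3:.2f} ms")
    return block

if __name__ == "__main__":
    audio_callback(BLOCK_SIZE)
```

The key point is that the callback can’t slow the audio down when it falls behind; plugins that handle it well typically shed load (steal voices, lower oversampling) instead of missing the buffer, which is why underpowered setups click rather than just playing sluggishly.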
Spectrogram*
That’s actually an accurate description of what is happening: an audio file turned into a 2D image, with the x-axis being time, the y-axis being frequency, and color being amplitude.
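A quick sketch of that mapping, assuming SciPy and matplotlib and a toy two-tone signal (nothing to do with what the OpenVINO plugins actually run):

```python
# Build a spectrogram: x-axis = time, y-axis = frequency, color = amplitude.
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs = 44_100                                   # sample rate in Hz
t = np.arange(0, 2.0, 1 / fs)                 # 2 seconds of audio
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)  # toy signal

f, times, Sxx = spectrogram(x, fs=fs, nperseg=1024)

plt.pcolormesh(times, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time [s]")                        # x-axis: time
plt.ylabel("Frequency [Hz]")                  # y-axis: frequency
plt.colorbar(label="Power [dB]")              # color: amplitude
plt.show()
```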
Good luck lmao