FabFilter User Forum

Moving FabFilter to GPU processing?

I have dreamt about utilizing GPU power for audio plugins for years, but it wasn't possible before. Now a company called BRAINGINES SA claims to have unlocked this potential, and they are offering to enable it in the plugins of any company that requests it.

I would love to hear what the folks at FabFilter have to say about this.


Here's the developers' website: www.braingines.com

Ibrahim

I just want to clarify that I know FabFilter's plugins are already super CPU-efficient and low-latency. However, this company claims a latency of 1 ms, which wasn't possible with GPU audio processing before due to the time it takes to transfer audio back and forth. I could see GPU processing coming in handy when oversampling is set to x32 on multiple plugins at once. I'm just curious to hear reactions from the FabFilter team.
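
As a rough back-of-envelope (my own numbers, assuming a 48 kHz session), that 1 ms budget is tiny in samples:

    #include <cstdio>

    int main() {
        const double sampleRate = 48000.0;  // assumed session rate
        const double budgetMs   = 1.0;      // the claimed round-trip latency
        const int    oversample = 32;       // x32 oversampling

        double samples = sampleRate * budgetMs / 1000.0;  // 48 samples per channel
        printf("Samples per channel in %.0f ms: %.0f\n", budgetMs, samples);
        printf("With %dx oversampling: %.0f samples to process\n",
               oversample, samples * oversample);
        return 0;
    }

So the whole round trip to the GPU and back would have to fit inside a 48-sample window, which is why the transfer time has always been the sticking point.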

Ibrahim

You can already process audio on the GPU via CUDA or, if you're smart enough, via a shader. But the applications for it are very limited in the audio industry, and simply managing the parallelism or transferring the data to the GPU already takes longer than the processing itself. Their "survey" to get the SDK also shows what they're after: a share of the income and maybe even exclusivity. I did the survey yesterday as soon as I read it and so far have received neither the SDK nor even a simple information email. For now I would read it as mostly marketing hype, and they've not really presented any examples.
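
To make the overhead point concrete, here is a minimal toy sketch (my own, nothing to do with their SDK): a trivial CUDA gain kernel. For a typical 64-512 sample audio block, the two PCIe copies cost far more than the kernel itself.

    #include <cuda_runtime.h>

    // Trivial gain kernel: one thread per sample.
    __global__ void applyGain(float* buf, int n, float gain) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) buf[i] *= gain;
    }

    void processBlock(float* host, int n, float gain) {
        float* dev = nullptr;
        cudaMalloc(&dev, n * sizeof(float));
        cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);  // PCIe hop 1
        applyGain<<<(n + 255) / 256, 256>>>(dev, n, gain);
        cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);  // PCIe hop 2
        cudaFree(dev);
    }

For a few hundred samples, those two copies plus the launch overhead cost microseconds that the math itself never earns back.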

Phil

This is certainly possible, but it seems impractical to me if every plug-in starts to communicate with the GPU independently. The best way to implement this would be if the host organizes the GPU communication in some way, so the audio data can stay on the GPU.
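
Purely as a hypothetical sketch of what I mean (the kernel names are placeholders, not any real API): the host would own one device buffer and chain every plug-in's kernel on a single stream, so the audio crosses the PCIe bus once per block instead of once per plug-in.

    #include <cuda_runtime.h>

    // Stand-ins for two plug-ins' DSP; real kernels would do actual work.
    __global__ void pluginA(float* buf, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) buf[i] *= 0.9f;
    }
    __global__ void pluginB(float* buf, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) buf[i] *= 1.1f;
    }

    // The host copies the block in once, runs the whole chain on one
    // stream, and copies it out once; no per-plug-in round trips.
    void processChain(float* host, float* dev, int n, cudaStream_t s) {
        cudaMemcpyAsync(dev, host, n * sizeof(float), cudaMemcpyHostToDevice, s);
        pluginA<<<(n + 255) / 256, 256, 0, s>>>(dev, n);
        pluginB<<<(n + 255) / 256, 256, 0, s>>>(dev, n);
        cudaMemcpyAsync(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost, s);
        cudaStreamSynchronize(s);
    }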

Another problem is that not all algorithms are a natural match for the GPU architecture, which favors massively parallel processing, something that isn't possible if you're e.g. filtering a single stream of samples.
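
A simple one-pole lowpass makes the problem obvious: every output sample depends on the previous one, so there is nothing to hand out to thousands of GPU threads.

    // One-pole lowpass: y[i] needs y[i-1], so the loop is inherently
    // serial. A GPU wants thousands of independent work items; a single
    // recursive filter offers exactly one chain of dependencies.
    void onePoleLowpass(float* buf, int n, float a) {
        float y = 0.0f;
        for (int i = 0; i < n; ++i) {
            y = a * buf[i] + (1.0f - a) * y;
            buf[i] = y;
        }
    }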

Finally, GPU drivers are often buggy (we run into this on a daily basis with our use of graphics acceleration in our plug-in interface), so I would expect some difficulty with getting this to work reliably on a wide range of GPU cards, drivers, and operating systems.

So I wouldn't expect this in a studio near you anytime soon. ;)

Cheers,

Frederik (FabFilter)

Maybe it's time to think about this again.

They seem to be making notable progress, as demonstrated this week at NAMM 2022.

www.youtube.com/watch?v=IFdgymosszA

John

You're better off leveraging Neural Engines and machine learning on dedicated FPGA/DSP engines. Just today AMD announced that Zen 5 will incorporate Xilinx IP for both.

Apple's M series has it already. That only leaves Intel, who's headed that way as we speak.

This also frees vendors from having to write a bunch of new code in the short run, while leveraging large amounts of vectorized code for the engines.

The GPU can always accelerate the interface, as with MetalFX, Vulkan, and DirectX 12.

AMD also announced that RDNA 3.0 will be chiplet-based like the Zen designs, with a new Infinity Fabric, and listed their roadmap for the next few years, including RDNA 4.0 and the Zen 5 series. By Zen 5, the GPUs, Neural Engines, etc. will all be chiplets across one unified memory backplane in the terabyte-bandwidth range.

Marc Driftmeyer

Going to be interesting to see what develops, and yes it seems that the Apple M chips have a lot of ability in this area already.

John
