No more writing code that runs on only one platform. Create your algorithm in G²CPU and let the backend do the rest: run it on a CUDA-enabled device or an OpenCL GPU, or fall back to a multithreaded CPU. The backend can be selected at runtime, so your customers can pivot while you focus on what's important.
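The runtime backend selection with CPU fallback can be sketched in plain Python. This is only a conceptual illustration: G²CPU itself is a LabVIEW toolkit, and all names and probe functions here are hypothetical, not the actual API.

```python
# Conceptual sketch of runtime backend selection with CPU fallback.
# Backend names and the device probes are hypothetical illustrations,
# not the actual G2CPU API.

def available_backends():
    """Probe the system for usable backends, in order of preference."""
    found = []
    if cuda_device_present():      # hypothetical probe
        found.append("CUDA")
    if opencl_device_present():    # hypothetical probe
        found.append("OpenCL")
    found.append("CPU")            # multithreaded CPU always works
    return found

def select_backend(preferred=None):
    """Pick the preferred backend if usable, otherwise fall back."""
    backends = available_backends()
    if preferred in backends:
        return preferred
    return backends[0]

# Stub probes so the sketch is self-contained; a real system would
# query the driver / platform instead of returning constants.
def cuda_device_present():
    return False

def opencl_device_present():
    return True
```

On this hypothetical system without CUDA, `select_backend("CUDA")` quietly falls back to `"OpenCL"`; the same call on a CUDA machine would return `"CUDA"` with no code change, which is the point of deciding at runtime.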
Code the way you are used to. G²CPU makes GPU computing easy by mirroring most of the common LabVIEW functions. Use what you already know and bring it to every computing platform, from multithreaded CPU to CUDA to OpenCL.
During development, G²CPU evaluates your code to catch errors early. Incorrect array sizes, mismatched data types, and more are all checked, so you can rest assured your code is in the best condition even before you start testing.
Debugging code on a GPU has never been easier. Pull data directly from the target backend simply by placing a probe on the block diagram. You will immediately see the contents no matter where the computation is running, from CPU to CUDA. On top of that, probes give you the tools to interpret the vast amounts of data that come with GPU computing.
Don't you just hate "out of memory" errors? With G²CPU they are a thing of the past. Each function evaluates all conditions and reports back at runtime, allowing you to handle any system error as you see fit.
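The idea of validating conditions and reporting a status instead of crashing can be sketched as follows. Again a conceptual Python illustration with hypothetical names, not the G²CPU API: each operation checks its inputs up front and hands the caller a status to act on.

```python
# Conceptual sketch of runtime condition checking: instead of dying
# with an "out of memory" error, the operation validates its inputs
# and returns a status the caller can handle (retry smaller, fall
# back to another backend, report to the user, ...).
# Names are illustrative, not the actual G2CPU API.

def checked_allocate(n_elements, bytes_per_element, free_bytes):
    """Return (ok, message); ok is False when allocation cannot succeed."""
    if n_elements <= 0:
        return False, "invalid size: element count must be positive"
    needed = n_elements * bytes_per_element
    if needed > free_bytes:
        return False, f"out of memory: need {needed} bytes, {free_bytes} free"
    return True, "ok"

# The caller decides how to react instead of the runtime crashing.
ok, msg = checked_allocate(1_000_000, 8, free_bytes=4_000_000)
```

Here the request for 8 MB against 4 MB of free memory comes back as a handleable status rather than an abort, which is the behavior the paragraph above describes.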
We will never hold you back. If you ever need to access the underlying data for custom function calls, you can do so at any time. You can even hand the pointers over to other environments through ArrayFire and let other teams pick up where you left off.
As the coding landscape is ever-changing, so are your needs.
Should you feel something is missing, don't hesitate to let us know.