I’m often challenged with finding the top-n observations when there are several continuous variables to consider. Statistical tests, scaling, and simple addition are all great tools for doing this, but matrix multiplication on GPUs seems to give more control. Ground truth can be computationally and memory expensive to set up, yet incredibly fast in deployment. Gradient descent is relatively light on compute and memory. Both ground truth and gradient descent make great use of GPUs. Is that enough to stick an AI label onto my solutions?
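
To make the matrix-multiplication idea concrete, here is a minimal sketch of what I mean by scoring observations with a single matmul: it assumes a hypothetical feature matrix `X` (observations by continuous variables) and a hand-picked weight vector `w`, neither of which comes from any particular dataset. The example uses NumPy for readability; the same product could be run through CuPy or PyTorch to land on a GPU.

```python
# Sketch only: X, w, and the weights themselves are illustrative assumptions,
# not a recommended scoring scheme.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 4))     # 10k observations, 4 continuous variables
w = np.array([0.5, 0.2, 0.2, 0.1])   # per-variable weights: more control than plain addition

scores = X @ w                       # one matrix-vector product scores every observation
n = 10
top_idx = np.argpartition(-scores, n)[:n]          # indices of the top-n scores (unordered)
top_idx = top_idx[np.argsort(-scores[top_idx])]    # order those n by score, descending

print(top_idx)
print(scores[top_idx])
```

Tuning `w` by hand is the cheap version; learning it from labels is where gradient descent would enter the picture.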