For our AI testing, we use UL Procyon’s Professional Benchmark Suite, focusing on two of its benchmarks: AI Computer Vision, which gauges how our systems handle inference-engine workloads, and AI Image Generation.
For Computer Vision, we report the total number of inferences performed, the average inference time, and the Index score across all tests. The Index score is the most useful of the three, since it is derived from the time taken per inference, which is what actually indicates performance. We run all AI Computer Vision tests at Float16 precision, testing both the CPU and the GPU under Microsoft’s Windows ML API.
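To illustrate how these three figures relate, here is a minimal sketch that aggregates hypothetical per-inference timings into the metrics we report. The model names, timings, and baseline values are our own assumptions for illustration only; UL does not publish the exact formula behind its Index score.

```python
# Illustrative aggregation of Computer Vision results.
# All timings, model names, and BASELINE_MS values are hypothetical;
# UL Procyon computes its real Index score internally.

from statistics import mean

# Hypothetical per-inference times in milliseconds, one list per model run.
run_timings_ms = {
    "mobilenet_v3": [2.1, 2.0, 2.2, 2.1],
    "resnet_50":    [5.4, 5.3, 5.5, 5.4],
}

# Assumed reference times for an index-style score (not UL's actual baselines).
BASELINE_MS = {"mobilenet_v3": 4.0, "resnet_50": 10.0}

total_inferences = sum(len(ts) for ts in run_timings_ms.values())
avg_inference_ms = mean(t for ts in run_timings_ms.values() for t in ts)

# A lower average time per inference yields a higher score, which is why
# the Index is the headline number: it is driven by inference time.
per_model_scores = [
    BASELINE_MS[m] / mean(ts) * 100 for m, ts in run_timings_ms.items()
]
index_score = mean(per_model_scores)

print(f"Total inferences: {total_inferences}")
print(f"Average inference time: {avg_inference_ms:.2f} ms")
print(f"Index-style score: {index_score:.0f}")
```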
For Image Generation, we test under Stable Diffusion 1.5 (Float16 precision), using every engine available on the system. For comparison purposes, however, we only include scores from the same engine. We report the Index score, the total time taken, and the image generation speed, with all results averaged across all tests. Best scores, where appropriate, are shown in bold.
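As a similar sketch for Image Generation, the snippet below derives the total time and generation speed from hypothetical per-image times. The numbers are invented for illustration and do not come from Procyon.

```python
# Illustrative derivation of Image Generation metrics from hypothetical
# per-image generation times (seconds). Values are made up for this example.

from statistics import mean

# Assumed seconds to generate each image across one benchmark run.
image_times_s = [3.8, 3.9, 3.7, 3.8, 3.9, 3.8, 3.7, 3.8]

total_time_s = sum(image_times_s)
avg_time_per_image_s = mean(image_times_s)
images_per_minute = 60 / avg_time_per_image_s  # generation speed

print(f"Total time: {total_time_s:.1f} s")
print(f"Average time per image: {avg_time_per_image_s:.2f} s")
print(f"Generation speed: {images_per_minute:.2f} images/min")
```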