intel onnx
Tests for a future article.

Intel Core i5-14600K and Intel Core i5-14500 testing with an ASUS PRIME Z790-P WIFI (1402 BIOS) motherboard on Ubuntu 23.10 via the Phoronix Test Suite.

a, b:

    Processor: Intel Core i5-14600K @ 5.30GHz (14 Cores / 20 Threads), Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS), Chipset: Intel Device 7a27, Memory: 2 x 16GB DRAM-6000MT/s Corsair CMK32GX5M2B6000C36, Disk: 1024GB SOLIDIGM SSDPFKKW010X7, Graphics: ASUS Intel RPL-S 31GB (1550MHz), Audio: Realtek ALC897, Monitor: ASUS VP28U

    OS: Ubuntu 23.10, Kernel: 6.7.0-060700-generic (x86_64), Desktop: GNOME Shell 45.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 23.2.1-1ubuntu3.1, Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1080

c, d:

    Processor: Intel Core i5-14500 @ 5.00GHz (14 Cores / 20 Threads), Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS), Chipset: Intel Device 7a27, Memory: 2 x 16GB DRAM-6000MT/s Corsair CMK32GX5M2B6000C36, Disk: 1024GB SOLIDIGM SSDPFKKW010X7, Graphics: ASUS Intel UHD 770 ADL-S GT1 31GB (1550MHz), Audio: Realtek ALC897, Monitor: ASUS VP28U

    OS: Ubuntu 23.10, Kernel: 6.7.0-060700-generic (x86_64), Desktop: GNOME Shell 45.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 23.2.1-1ubuntu3.1, Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1080
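Each chart below is labeled with an Executor of either Parallel or Standard. In ONNX Runtime's Python API this distinction presumably corresponds to the session's execution mode: ORT_PARALLEL may schedule independent branches of the graph concurrently, while the default sequential mode runs one operator at a time. A minimal sketch of selecting between the two ("model.onnx" is a placeholder path, not one of the benchmark's model files):

    import onnxruntime as ort

    # "Executor: Parallel" vs. "Executor: Standard" in the charts below
    # likely maps to ONNX Runtime's execution mode: ORT_PARALLEL may run
    # independent graph branches concurrently, while the default
    # ORT_SEQUENTIAL runs one operator at a time.
    opts = ort.SessionOptions()
    opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL  # or ORT_SEQUENTIAL
    opts.intra_op_num_threads = 0  # 0 = let the runtime pick a thread count

    # Placeholder model path for illustration.
    session = ort.InferenceSession("model.onnx", sess_options=opts,
                                   providers=["CPUExecutionProvider"])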
ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 109.53 |=================================================================
b . 113.20 |===================================================================
c . 109.36 |=================================================================
d . 107.33 |================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 9.12514 |=================================================================
b . 8.83005 |===============================================================
c . 9.13993 |=================================================================
d . 9.31307 |==================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 173.51 |==================================================================
b . 174.03 |==================================================================
c . 176.04 |===================================================================
d . 176.58 |===================================================================

ONNX Runtime 1.17
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 5.76013 |==================================================================
b . 5.74282 |==================================================================
c . 5.67713 |=================================================================
d . 5.65923 |=================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 10.03980 |==============================================================
b . 10.57730 |=================================================================
c . 10.17790 |===============================================================
d . 9.48916 |==========================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 99.60 |===============================================================
b . 94.54 |============================================================
c . 98.25 |==============================================================
d . 105.38 |===================================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 12.85 |====================================================================
b . 12.83 |====================================================================
c . 11.84 |===============================================================
d . 11.86 |===============================================================

ONNX Runtime 1.17
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 77.84 |===============================================================
b . 77.95 |===============================================================
c . 84.45 |====================================================================
d . 84.34 |====================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 134.59 |===================================================================
b . 126.44 |===============================================================
c . 133.14 |==================================================================
d . 129.34 |================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 7.42879 |==============================================================
b . 7.90741 |==================================================================
c . 7.50952 |===============================================================
d . 7.73021 |=================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 201.51 |================================================================
b . 201.77 |================================================================
c . 208.88 |===================================================================
d . 210.03 |===================================================================

ONNX Runtime 1.17
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 4.96161 |==================================================================
b . 4.95524 |==================================================================
c . 4.78645 |================================================================
d . 4.75974 |===============================================================
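Each Inference Time Cost chart is the reciprocal view of the Inferences Per Second chart before it: time in milliseconds is roughly 1000 divided by the throughput. For run a's T5 Encoder Standard result, 1000 / 201.51 = 4.962 ms, which matches the reported 4.96161 ms up to rounding of the averaged samples.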
ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 16.07 |====================================================================
b . 13.73 |==========================================================
c . 12.59 |=====================================================
d . 14.43 |=============================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 62.23 |=====================================================
b . 72.84 |==============================================================
c . 79.43 |====================================================================
d . 69.28 |===========================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 18.46 |====================================================================
b . 18.54 |====================================================================
c . 17.33 |================================================================
d . 17.32 |================================================================

ONNX Runtime 1.17
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 54.17 |================================================================
b . 53.93 |================================================================
c . 57.71 |====================================================================
d . 57.74 |====================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 266.06 |===================================================================
b . 263.02 |==================================================================
c . 247.00 |==============================================================
d . 249.43 |===============================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 3.75732 |=============================================================
b . 3.80074 |==============================================================
c . 4.04715 |==================================================================
d . 4.00771 |=================================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 722.92 |===================================================================
b . 710.94 |==================================================================
c . 649.33 |============================================================
d . 654.42 |=============================================================

ONNX Runtime 1.17
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 1.38253 |===========================================================
b . 1.40589 |============================================================
c . 1.53907 |==================================================================
d . 1.52723 |=================================================================
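The per-model throughput and latency figures come from repeatedly timing inference calls. A rough, self-contained sketch of that kind of measurement, assuming any local ONNX model with a single float input ("model.onnx" is again a placeholder, and the warm-up/iteration counts are arbitrary, not the benchmark's actual settings):

    import time

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx",
                                providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    # Substitute 1 for any dynamic (symbolic/None) dimensions.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    data = np.random.rand(*shape).astype(np.float32)

    for _ in range(10):  # warm-up runs, excluded from the timing
        sess.run(None, {inp.name: data})

    n = 100
    start = time.perf_counter()
    for _ in range(n):
        sess.run(None, {inp.name: data})
    elapsed = time.perf_counter() - start

    print(f"{n / elapsed:.2f} inferences per second, "
          f"{1000 * elapsed / n:.5f} ms per inference")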
ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 1.58044 |==================================================================
b . 1.52026 |===============================================================
c . 1.35743 |=========================================================
d . 1.44994 |=============================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 632.73 |==========================================================
b . 657.78 |============================================================
c . 736.68 |===================================================================
d . 689.68 |===============================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 2.11386 |==================================================================
b . 2.09046 |=================================================================
c . 1.78177 |========================================================
d . 1.78007 |========================================================

ONNX Runtime 1.17
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 473.07 |========================================================
b . 478.36 |=========================================================
c . 561.24 |===================================================================
d . 561.77 |===================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 8.25464 |==================================================================
b . 8.23229 |==================================================================
c . 7.63408 |=============================================================
d . 7.53407 |============================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 121.14 |=============================================================
b . 121.47 |=============================================================
c . 130.99 |==================================================================
d . 132.73 |===================================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 8.51173 |==================================================================
b . 8.49818 |==================================================================
c . 7.78147 |============================================================
d . 7.78073 |============================================================

ONNX Runtime 1.17
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 117.48 |=============================================================
b . 117.67 |=============================================================
c . 128.51 |===================================================================
d . 128.52 |===================================================================
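Several models in these results carry an -int8 suffix (CaffeNet 12-int8 above, ResNet50 v1-12-int8 and Faster R-CNN R-50-FPN-int8 below); those are quantized variants, which is why their throughput sits far above the fp32 models. The benchmark downloads them pre-quantized, but purely as an illustration of how such a model can be produced, ONNX Runtime ships a dynamic-quantization helper; the file names here are hypothetical:

    from onnxruntime.quantization import QuantType, quantize_dynamic

    # Hypothetical file names for illustration only.
    quantize_dynamic(
        model_input="resnet50-v1-12.onnx",        # fp32 source model
        model_output="resnet50-v1-12-int8.onnx",  # quantized result
        weight_type=QuantType.QInt8,              # store weights as signed int8
    )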
ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 135.11 |===================================================================
b . 132.06 |=================================================================
c . 124.20 |==============================================================
d . 125.59 |==============================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 7.40035 |=============================================================
b . 7.57099 |==============================================================
c . 8.04979 |==================================================================
d . 7.96090 |=================================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 239.34 |===================================================================
b . 239.94 |===================================================================
c . 216.78 |=============================================================
d . 217.53 |=============================================================

ONNX Runtime 1.17
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 4.17760 |============================================================
b . 4.16707 |============================================================
c . 4.61229 |==================================================================
d . 4.59654 |==================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 64.27 |===================================================================
b . 64.79 |====================================================================
c . 63.68 |===================================================================
d . 61.31 |================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 15.56 |=================================================================
b . 15.43 |================================================================
c . 15.70 |=================================================================
d . 16.31 |====================================================================

ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 82.64 |====================================================================
b . 82.67 |====================================================================
c . 79.19 |=================================================================
d . 79.19 |=================================================================
ONNX Runtime 1.17
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 12.10 |=================================================================
b . 12.10 |=================================================================
c . 12.63 |====================================================================
d . 12.63 |====================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 39.97 |====================================================================
b . 39.46 |===================================================================
c . 36.23 |==============================================================
d . 36.16 |==============================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 25.01 |==============================================================
b . 25.34 |==============================================================
c . 27.60 |====================================================================
d . 27.65 |====================================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 53.15 |====================================================================
b . 52.45 |===================================================================
c . 46.18 |===========================================================
d . 45.98 |===========================================================

ONNX Runtime 1.17
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 18.81 |===========================================================
b . 19.06 |============================================================
c . 21.65 |====================================================================
d . 21.75 |====================================================================
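As a quick worked comparison from the final chart, the i5-14600K systems (a, b) lead the i5-14500 systems (c, d) on Faster R-CNN R-50-FPN-int8 Standard throughput by about 53.15 / 46.18 = 1.15, i.e. roughly 15%, and a similar gap shows up on most of the Standard-executor tests above. These results should be reproducible with the Phoronix Test Suite's ONNX Runtime test profile, e.g. phoronix-test-suite benchmark onnx.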