system_test_MNN

Intel Core i5-10210U testing with a HUAWEI NBLB-WAX9N-PCB (1.45 BIOS) and Intel CometLake-U GT2 [UHD] 8GB on Debian 12 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2405085-AMET-SYSTEMT33
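
To reproduce this comparison on another Debian 12 machine, a minimal sketch, assuming the phoronix-test-suite package from the Debian repositories (installing from the upstream .deb or Git tree works the same way):

    # Install the Phoronix Test Suite from the Debian archive
    sudo apt update && sudo apt install -y phoronix-test-suite

    # Re-run the tests in this result file and compare against it;
    # the suite downloads and builds the MNN test profile as needed
    phoronix-test-suite benchmark 2405085-AMET-SYSTEMT33

At the end of the run, the suite typically prompts whether to save the new result and upload it to OpenBenchmarking.org alongside this one.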

Result Identifier: System_Test
Date Run: May 08
Test Duration: 7 Hours, 48 Minutes


Processor: Intel Core i5-10210U @ 4.20GHz (4 Cores / 8 Threads)
Motherboard: HUAWEI NBLB-WAX9N-PCB (1.45 BIOS)
Chipset: Intel Comet Lake PCH-LP
Memory: 2 x 4 GB DDR4-2667MT/s Samsung K4A8G165WC-BCTD
Disk: 512GB Western Digital PC SN730 SDBPNTY-512G-1027
Graphics: Intel CometLake-U GT2 [UHD] 8GB (1100MHz)
Audio: Intel Comet Lake PCH-LP cAVS
Network: Intel Comet Lake PCH-LP CNVi WiFi
OS: Debian 12
Kernel: 6.1.0-18-amd64 (x86_64)
Desktop: KDE Plasma 5.27.5
Display Server: X Server 1.21.1.7
Display Driver: modesetting 1.21.1
OpenGL: 4.6 Mesa 22.3.6
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xde
- Security Notes: gather_data_sampling: Vulnerable: No microcode + itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Vulnerable: Clear buffers attempted, no microcode; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Mitigation of Microcode + tsx_async_abort: Not affected

Results Overview - System_Test (Mobile Neural Network, all results in ms, fewer is better):

  nasnet: 22.250
  mobilenetV3: 2.921
  squeezenetv1.1: 6.441
  resnet-v2-50: 56.511
  SqueezeNetV1.0: 10.323
  MobileNetV2_224: 7.256
  mobilenet-v1-1.0: 7.065
  inception-v3: 72.134

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not a GPU-accelerated configuration. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.
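
To benchmark MNN by itself rather than replaying this entire result file, the test profile can also be installed and run directly. A minimal sketch, assuming the profile identifier is pts/mnn (check OpenBenchmarking.org for the current profile name and version):

    # Fetch and build the CPU / OpenMP MNN test profile
    phoronix-test-suite install pts/mnn

    # Run it; each model shown below (nasnet, mobilenetV3, ...) is a sub-test option
    phoronix-test-suite run pts/mnn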

OpenBenchmarking.org - Mobile Neural Network 2.1 results (ms, Fewer Is Better):

Model: nasnet - System_Test: 22.25 (SE +/- 0.96, N = 9, MIN: 14.01 / MAX: 107.62)
Model: mobilenetV3 - System_Test: 2.921 (SE +/- 0.093, N = 9, MIN: 2.38 / MAX: 39.29)
Model: squeezenetv1.1 - System_Test: 6.441 (SE +/- 0.215, N = 9, MIN: 4.89 / MAX: 51.72)
Model: resnet-v2-50 - System_Test: 56.51 (SE +/- 1.21, N = 9, MIN: 46.54 / MAX: 207.82)
Model: SqueezeNetV1.0 - System_Test: 10.32 (SE +/- 0.33, N = 9, MIN: 8.23 / MAX: 144.76)
Model: MobileNetV2_224 - System_Test: 7.256 (SE +/- 0.658, N = 9, MIN: 5.25 / MAX: 80.74)
Model: mobilenet-v1-1.0 - System_Test: 7.065 (SE +/- 0.189, N = 9, MIN: 5.71 / MAX: 48.57)
Model: inception-v3 - System_Test: 72.13 (SE +/- 2.20, N = 9, MIN: 58.44 / MAX: 280.68)

8 Results Shown

Mobile Neural Network:
  nasnet
  mobilenetV3
  squeezenetv1.1
  resnet-v2-50
  SqueezeNetV1.0
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3