neu

AMD EPYC 7R13 48-Core testing with a Supermicro H12SSL-I v1.02 (2.7 BIOS) and ASPEED 24GB on EndeavourOS rolling via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403106-NE-NEU68790101
Result Identifier: AMD EPYC 7R13 48-Core - ASPEED 24GB - Supermicro
Date Run: March 10
Test Duration: 1 Minute


neu - OpenBenchmarking.org / Phoronix Test Suite

Processor: AMD EPYC 7R13 48-Core @ 3.73GHz (48 Cores / 96 Threads)
Motherboard: Supermicro H12SSL-I v1.02 (2.7 BIOS)
Chipset: AMD Starship/Matisse
Memory: 256GB
Disk: 15363GB Micron_7450_MTFDKCC15T3TFR
Graphics: ASPEED 24GB
Audio: NVIDIA AD102 HD Audio
Monitor: 38GN950
Network: 2 x Intel X710 for 10GbE SFP+
OS: EndeavourOS rolling
Kernel: 6.7.9-zen1-1-zen (x86_64)
Display Server: X Server 1.21.1.11
Display Driver: NVIDIA
Compiler: GCC 13.2.1 20230801 + Clang 17.0.6 + LLVM 17.0.6 + CUDA 12.4
File-System: btrfs
Screen Resolution: 1024x768

System Logs:
- Transparent Huge Pages: always
- NVCC_PREPEND_FLAGS="-ccbin /opt/cuda/bin"
- Compiler configuration: --disable-libssp --disable-libstdcxx-pch --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet=auto --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=ada,c,c++,d,fortran,go,lto,m2,objc,obj-c++ --enable-libstdcxx-backtrace --enable-link-serialization=1 --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-build-config=bootstrap-lto --with-linker-hash-style=gnu
- Scaling Governor: amd-pstate-epp performance (EPP: performance)
- CPU Microcode: 0xa0011d3
- Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Llama.cpp

Llama.cpp, developed by Georgi Gerganov, is a C/C++ port of Facebook's LLaMA model that allows inference of LLaMA and other supported models. For CPU inference, Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs, along with features such as OpenBLAS usage. Learn more via the OpenBenchmarking.org test page.

Llama.cpp b1808 - Model: llama-2-7b.Q4_0.gguf (OpenBenchmarking.org; Tokens Per Second, More Is Better)

AMD EPYC 7R13 48-Core - ASPEED 24GB - Supermicro: 27.83 (SE +/- 0.17, N = 3)

1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -fopenmp -lopenblas
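The reported figure is an average over N = 3 runs with a standard error. As a rough sketch of how such a summary is derived (the three individual per-run tokens/sec values below are hypothetical, not taken from this result file):

```python
import math

def mean_and_se(samples):
    """Return the mean and standard error (sample stddev / sqrt(n))."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction (n - 1 denominator).
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var) / math.sqrt(n)

# Hypothetical per-run tokens/sec values for illustration only.
runs = [27.6, 27.8, 28.1]
avg, se = mean_and_se(runs)
print(f"{avg:.2f} tokens/s, SE +/- {se:.2f}")  # → 27.83 tokens/s, SE +/- 0.15
```

The standard error shrinks with the square root of the run count, which is why the Phoronix Test Suite repeats each test several times before reporting a single value.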