7950X_4090_ubuntu22_pytorch


Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2404018-NE-7950X409062
Result Identifier: 7950X_4090_ubuntu22_pytorch
Date Run: April 01
Test Duration: 2 Hours, 8 Minutes


System Details

Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads)
Motherboard: 0KDR38 (1.10.1 BIOS)
Chipset: AMD Device 14d8
Memory: 4 x 32 GB DDR5-3600MT/s M323R4GA3BB0-CQKOD
Disk: CA6-8D2048-Q11 NVMe SSSTC 2048GB + 2000GB Seagate ST2000DM008-2UB1
Graphics: NVIDIA GeForce RTX 4090 24GB
Audio: NVIDIA Device 22ba
Monitor: LG HDR DQHD
Network: Realtek RTL8125 2.5GbE + Qualcomm Atheros QCNFA765
OS: Ubuntu 22.04
Kernel: 6.5.0-26-generic (x86_64)
Desktop: GNOME Shell 42.9
Display Server: X Server 1.21.1.4
Display Driver: NVIDIA 550.54.14
OpenGL: 4.6.0
OpenCL: OpenCL 3.0 CUDA 12.4.89
Vulkan: 1.3.277
Compiler: GCC 11.4.0 + CUDA 12.4
File-System: ext4
Screen Resolution: 3840x1080

System Logs
- Transparent Huge Pages: madvise
- Scaling Governor: amd-pstate-epp powersave (EPP: performance)
- CPU Microcode: 0xa601203
- Python 3.10.12
- gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Results Overview: PyTorch 2.2.1, Device: NVIDIA CUDA GPU (batches/sec, more is better)

Batch Size    ResNet-50    ResNet-152    Efficientnet_v2_l
1             389.01       138.05        71.99
16            380.15       138.68        69.60
32            380.93       138.82        69.73
64            380.31       139.49        69.47
256           379.16       139.57        69.64
512           379.59       139.59        69.49

PyTorch

Detailed results for PyTorch 2.2.1, Device: NVIDIA CUDA GPU (batches/sec, more is better):

Model               Batch Size   Result    SE +/-     N   Min      Max
ResNet-50           1            389.01    1.58       6   274.46   406.66
ResNet-50           16           380.15    0.48     100   259.60   403.80
ResNet-50           32           380.93    1.00      20   301.68   395.65
ResNet-50           64           380.31    1.01      20   295.00   393.65
ResNet-50           256          379.16    0.98      20   313.69   393.28
ResNet-50           512          379.59    1.20      20   317.27   395.20
ResNet-152          1            138.05    0.45      20    82.90   144.86
ResNet-152          16           138.68    0.42      20   110.38   145.41
ResNet-152          32           138.82    0.38      20   115.08   144.91
ResNet-152          64           139.49    0.33      20   116.37   144.17
ResNet-152          256          139.57    0.40      20   113.63   144.53
ResNet-152          512          139.59    0.55      20   114.91   146.81
Efficientnet_v2_l   1            71.99     0.23      20    59.11    75.02
Efficientnet_v2_l   16           69.60     0.22      20    56.88    72.75
Efficientnet_v2_l   32           69.73     0.21      20    57.60    73.33
Efficientnet_v2_l   64           69.47     0.25      20    57.52    72.93
Efficientnet_v2_l   256          69.64     0.16      20    54.54    73.00
Efficientnet_v2_l   512          69.49     0.16      20    41.96    73.49
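
The metric reported above is batches processed per second on the CUDA device. As a rough illustration only (this is not the Phoronix Test Suite pytorch test profile script), the minimal PyTorch sketch below times repeated forward passes of a torchvision ResNet-50 with a synthetic input batch and reports batches/sec. The model choice, warm-up count, and iteration count are assumptions for illustration; the official results additionally aggregate many independent runs, which is where the SE and N columns come from.

# Minimal sketch, not the Phoronix Test Suite script: time ResNet-50 forward
# passes on a CUDA device and report throughput in batches per second.
import time
import torch
import torchvision.models as models

def measure_batches_per_sec(batch_size=16, warmup=10, iters=50):
    device = torch.device("cuda")
    model = models.resnet50(weights=None).to(device).eval()   # random weights are fine for timing
    x = torch.randn(batch_size, 3, 224, 224, device=device)   # synthetic input batch

    with torch.no_grad():
        for _ in range(warmup):           # warm-up: CUDA context init, cuDNN autotuning
            model(x)
        torch.cuda.synchronize()

        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()          # wait for queued GPU work before stopping the clock
        elapsed = time.time() - start

    return iters / elapsed                # batches processed per second

if __name__ == "__main__":
    print(f"ResNet-50, batch size 16: {measure_batches_per_sec():.2f} batches/sec")

The explicit torch.cuda.synchronize() calls matter because CUDA kernel launches are asynchronous; without them the timer would stop before the GPU has actually finished the queued forward passes.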