Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/).

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark deepsparse.
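The PTS command above wraps DeepSparse's own deepsparse.benchmark utility. A minimal sketch of both invocations is below; the SparseZoo stub is a placeholder, not a real model path, and the direct-invocation options may vary by DeepSparse version (check `deepsparse.benchmark --help`):

```shell
# Run the full test profile via the Phoronix Test Suite
# (command confirmed by this page):
phoronix-test-suite benchmark deepsparse

# Invoke the underlying benchmark utility directly against a
# SparseZoo model stub (placeholder shown; browse
# https://sparsezoo.neuralmagic.com/ for real stubs):
deepsparse.benchmark "zoo:MODEL_STUB"
```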

Project Site

neuralmagic.com

Source Repository

github.com

Test Created

13 October 2022

Last Updated

15 March 2024

Test Maintainer

Michael Larabel 

Test Type

System

Average Install Time

20 Minutes, 53 Seconds

Average Run Time

3 Minutes, 4 Seconds

Test Dependencies

Python

Accolades

30k+ Downloads

Supported Platforms


[Chart: Neural Magic DeepSparse Popularity Statistics (pts/deepsparse), OpenBenchmarking.org - monthly events from 2022.10 through 2024.05 covering public result uploads*, reported installs**, reported test completions**, and test profile page views.]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
Data updated weekly as of 6 May 2024.
Model Option Popularity (OpenBenchmarking.org):
- NLP Document Classification, oBERT base uncased on IMDB: 9.1%
- CV Classification, ResNet-50 ImageNet: 9.1%
- NLP Text Classification, DistilBERT mnli: 9.1%
- Llama2 Chat 7b Quantized: 8.7%
- BERT-Large, NLP Question Answering, Sparse INT8: 9.1%
- NLP Text Classification, BERT base uncased SST2, Sparse INT8: 9.1%
- CV Detection, YOLOv5s COCO, Sparse INT8: 9.1%
- CV Segmentation, 90% Pruned YOLACT Pruned: 9.1%
- ResNet-50, Sparse INT8: 9.1%
- NLP Token Classification, BERT base uncased conll2003: 9.1%
- ResNet-50, Baseline: 9.1%

Revision History

pts/deepsparse-1.7.0   [View Source]   Fri, 15 Mar 2024 12:35:17 GMT
Update against DeepSparse 1.7 upstream, add Llama 2 chat test.

pts/deepsparse-1.6.0   [View Source]   Mon, 11 Dec 2023 16:59:10 GMT
Update against deepsparse 1.6 upstream.

pts/deepsparse-1.5.2   [View Source]   Wed, 26 Jul 2023 15:52:28 GMT
Update against 1.5.2 point release, add more models.

pts/deepsparse-1.5.0   [View Source]   Wed, 07 Jun 2023 07:51:58 GMT
Update against Deepsparse 1.5 upstream.

pts/deepsparse-1.3.2   [View Source]   Sun, 22 Jan 2023 19:05:03 GMT
Update against DeepSparse 1.3.2 upstream.

pts/deepsparse-1.0.1   [View Source]   Thu, 13 Oct 2022 13:47:39 GMT
Initial commit of DeepSparse benchmark.

Suites Using This Test

Machine Learning

HPC - High Performance Computing


Performance Metrics

Analyze Test Configuration:

Neural Magic DeepSparse 1.7

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org metrics for this test profile configuration based on 67 public results since 15 March 2024 with the latest data as of 26 March 2024.

Below is an overview of the generalized performance for components where sufficient, statistically significant data is available from user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary vastly; this overview is intended to offer only general guidance on performance expectations.

Component | Percentile Rank | # Compatible Public Results | items/sec (Average)
          | 97th            | 5                           | 327 +/- 1
          | 78th            | 7                           | 261 +/- 2
Mid-Tier  | 75th            |                             | < 261
          | 63rd            | 3                           | 215 +/- 1
Median    | 50th            |                             | 193
          | 36th            | 4                           | 160 +/- 2
          | 30th            | 4                           | 142 +/- 2
Low-Tier  | 25th            |                             | < 133
          | 19th            | 3                           | 127 +/- 5
[Chart: Distribution Of Public Results - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream - 67 results ranging from 85 to 329 items/sec (OpenBenchmarking.org).]

Based on OpenBenchmarking.org data, the selected test / test configuration (Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream) has an average run-time of 3 minutes. By default this test profile is set to run at least 3 times but may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.
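The adaptive run-count behavior described above can be sketched as follows. This is an illustrative model, not the actual Phoronix Test Suite implementation; the minimum/maximum run counts and the variability threshold are assumed values for demonstration only.

```python
import statistics


def runs_needed(run_once, min_runs=3, max_runs=15, cv_threshold=0.035):
    """Sketch of adaptive benchmark repetition: always perform a minimum
    number of runs, then keep adding runs while the result variability
    (coefficient of variation = stddev / mean) stays above a threshold.

    min_runs, max_runs, and cv_threshold are illustrative values, not
    the Phoronix Test Suite's actual defaults.
    """
    results = [run_once() for _ in range(min_runs)]
    while len(results) < max_runs:
        cv = statistics.stdev(results) / statistics.mean(results)
        if cv <= cv_threshold:
            break  # results are stable enough; stop early
        results.append(run_once())  # too noisy; collect another sample
    return results
```

A perfectly stable benchmark stops after the minimum three runs, while a noisy one accumulates additional runs until the variability drops or the cap is reached.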

[Chart: Time Required To Complete Benchmark (Minutes) - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream - Min: 2 / Avg: 2.27 / Max: 3 (OpenBenchmarking.org).]

Tested CPU Architectures

This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.

CPU Architecture       | Kernel Identifier | Verified On
Intel / AMD x86 64-bit | x86_64            | (Many Processors)
ARMv8 64-bit           | aarch64           | ARMv8 Neoverse-N1 128-Core