## Benchmarks

Typically a benchmark reports either the amount of work done in a constant amount of time or the time taken to complete a constant amount of work. The benchmarks here all do the latter. The initial set of benchmarks was pulled from Sightglass, but the benchmarks used with WasmScore come from the local directory here and have no dependency on the benchmarks stored in the Sightglass repo. However, how the benchmarks here are built and run does depend directly on changes to the external Sightglass repo.
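To make the time-for-a-fixed-amount-of-work approach concrete, here is a minimal sketch of such a measurement loop. This is not WasmScore's actual harness; the `fib` workload and the iteration count are invented for illustration:

```python
import time

def fib(n):
    # A small, CPU-bound workload standing in for a benchmark kernel.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def time_fixed_work(workload, iterations):
    # Time-per-constant-work: run a fixed number of iterations
    # and report the total elapsed wall-clock time.
    start = time.perf_counter()
    for _ in range(iterations):
        workload()
    return time.perf_counter() - start

elapsed = time_fixed_work(lambda: fib(15), 100)
print(f"100 iterations took {elapsed:.4f}s")
```

The alternative style (work done per constant time) would instead fix a deadline and count completed iterations; WasmScore's benchmarks use the fixed-work style above.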

Benchmarks are often categorized based on their purpose and origin. Two such buckets are (1) code written with the original intent of being user facing (hot paths in library code, typical application usage, etc.) and (2) code written specifically to benchmark some important or commonly used code construct or platform component. WasmScore does not aim to favor either of these buckets, as both are valuable in the evaluation of standalone Wasm performance, depending on what you want to test and what you are trying to achieve.

## WasmScore principles

WasmScore aims to serve as a standalone Wasm benchmark and benchmarking framework that:

- Is convenient to build and run, with useful and easy-to-interpret results.
- Is portable, enabling cross-platform comparisons.
- Provides a breadth of coverage for typical current standalone use cases and expected future use cases.
- Can be executed in a way that is convenient to analyze.

## WasmScore Tests

Any number of tests can be created, but "wasmscore" is the initial and default test. It includes a mix of relevant in-use code and platform-targeted benchmarks for testing Wasm performance outside the browser. The test is a collection of several subtests (also referred to as suites):
### wasmscore (default):
- App: ['Meshoptimizer']
- Core: ['Ackermann', 'Ctype', 'Fibonacci']
- Crypto: ['Base64', 'Ed25519', 'Seqhash']
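A test's subtests each produce timing results that are then rolled up into a single figure. As an illustration of one common way to aggregate benchmark results, here is a geometric-mean sketch; the formula and the per-suite numbers are assumptions for the example, not necessarily WasmScore's exact scoring method:

```python
import math

def geomean(values):
    # Geometric mean: a conventional way to aggregate benchmark times or
    # ratios, since no single benchmark's scale dominates the result.
    values = list(values)
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-suite times (in seconds) for the suites listed above.
suite_times = {"App": 2.0, "Core": 0.5, "Crypto": 1.0}
score = geomean(suite_times.values())
print(f"aggregate: {score:.3f}")  # → aggregate: 1.000
```

A geometric mean is preferred over an arithmetic mean here because halving one suite's time and doubling another's leaves the aggregate unchanged.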
Next steps include:
- Improving stability and user experience
- Adding benchmarks to the AI, Regex, and App suites
- Adding more benchmarks
- Complete the "simdscore" test
- Publish a list of planned milestones with corresponding releases