
Commit e47d00b

Add examples for JS (#43)
* Add examples for JS
* resolve comments
1 parent d2a9555 commit e47d00b

5 files changed: +181 −0 lines changed


Diff for: js/README.md

+4
@@ -25,3 +25,7 @@ Click links for README of each examples.
 * [API usage - Tensor](api-usage_tensor) - a demonstration of basic usage of `Tensor`.
 * [API usage - InferenceSession](api-usage_inference-session) - a demonstration of basic usage of `InferenceSession`.
+* [API usage - SessionOptions](api-usage_session-options) - a demonstration of how to configure creation of an `InferenceSession` instance.
+* [API usage - `ort.env` flags](api-usage_ort-env-flags) - a demonstration of how to configure a set of global flags.

Diff for: js/api-usage_inference-session/README.md

+2
@@ -10,6 +10,8 @@ This example is a demonstration of basic usage of `InferenceSession`.
 For more information about `SessionOptions` and `RunOptions`, please refer to other examples.
 
+See also [`InferenceSession.create`](https://onnxruntime.ai/docs/api/js/interfaces/InferenceSessionFactory.html#create) and the [`InferenceSession` interface](https://onnxruntime.ai/docs/api/js/interfaces/InferenceSession.html) in the API reference documentation.
+
 ## Usage
 
 ```sh

Diff for: js/api-usage_ort-env-flags/README.md

+73
@@ -0,0 +1,73 @@

# API usage - `ort.env` flags

## Summary

This example is a demonstration of how to configure global flags via `ort.env`.

Following are some example code snippets:

```js
// enable DEBUG flag
ort.env.debug = true;

// set the global logging level
ort.env.logLevel = 'info';
```

See also the [`Env` interface](https://onnxruntime.ai/docs/api/js/interfaces/Env.html) in the API reference documentation.

### WebAssembly flags (ONNX Runtime Web)

WebAssembly flags customize the behavior of the WebAssembly execution provider.

Following are some example code snippets:

```js
// use up to 2 threads for multi-threaded WebAssembly execution
// may fall back to single-threaded if multi-threading is not available in the current browser
ort.env.wasm.numThreads = 2;

// force single-threaded execution for WebAssembly
ort.env.wasm.numThreads = 1;

// enable the worker-proxy feature for WebAssembly
// this feature allows model inferencing to run asynchronously in a web worker
ort.env.wasm.proxy = true;

// override the path of wasm files - using a prefix
// in this example, ONNX Runtime Web will try to load files from https://example.com/my-example/ort-wasm*.wasm
ort.env.wasm.wasmPaths = 'https://example.com/my-example/';

// override the path of wasm files - for each file
ort.env.wasm.wasmPaths = {
  'ort-wasm.wasm': 'https://example.com/my-example/ort-wasm.wasm',
  'ort-wasm-simd.wasm': 'https://example.com/my-example/ort-wasm-simd.wasm',
  'ort-wasm-threaded.wasm': 'https://example.com/my-example/ort-wasm-threaded.wasm',
  'ort-wasm-simd-threaded.wasm': 'https://example.com/my-example/ort-wasm-simd-threaded.wasm'
};
```
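The two `wasmPaths` forms above amount to a prefix lookup versus a per-file lookup. The following is a minimal sketch of that resolution logic; `resolveWasmPath` is a hypothetical helper written for illustration only, not part of the onnxruntime-web API:

```js
// Sketch of how a wasm file name could resolve against the two wasmPaths forms.
// NOTE: resolveWasmPath is a hypothetical helper, NOT part of onnxruntime-web.
function resolveWasmPath(wasmPaths, fileName) {
  if (typeof wasmPaths === 'string') {
    // prefix form: the file name is appended to the prefix
    return wasmPaths + fileName;
  }
  // per-file form: look up the full URL for this exact file name
  return wasmPaths[fileName];
}

// prefix form
resolveWasmPath('https://example.com/my-example/', 'ort-wasm-simd.wasm');
// -> 'https://example.com/my-example/ort-wasm-simd.wasm'

// per-file form
resolveWasmPath(
  { 'ort-wasm.wasm': 'https://example.com/my-example/ort-wasm.wasm' },
  'ort-wasm.wasm');
// -> 'https://example.com/my-example/ort-wasm.wasm'
```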

See also [WebAssembly flags](https://onnxruntime.ai/docs/api/js/interfaces/Env.WebAssemblyFlags.html) in the API reference documentation.

### WebGL flags (ONNX Runtime Web)

WebGL flags customize the behavior of the WebGL execution provider.

Following are some example code snippets:

```js
// enable packed textures; this helps improve inference performance for some models
ort.env.webgl.pack = true;
```

See also [WebGL flags](https://onnxruntime.ai/docs/api/js/interfaces/Env.WebGLFlags.html) in the API reference documentation.

### SessionOptions vs. ort.env

Both `SessionOptions` and `ort.env` let you configure inference behavior. The key difference is scope: `SessionOptions` applies to a single inference session instance, while `ort.env` applies globally.

See also [API usage - `SessionOptions`](../api-usage_session-options) for an example of using `SessionOptions`.

## Usage

The code snippets demonstrated above cannot run standalone. Put the code into one of the "Quick Start" examples and try it out.

Diff for: js/api-usage_session-options/README.md

+101
@@ -0,0 +1,101 @@

# API usage - SessionOptions

## Summary

This example is a demonstration of how to configure an `InferenceSession` instance using `SessionOptions`.

A `SessionOptions` is an object with properties that instruct the creation of an `InferenceSession` instance. See the [type declaration](https://github.com/microsoft/onnxruntime/blob/master/js/common/lib/inference-session.ts) for the schema definition. `SessionOptions` is passed to `InferenceSession.create()` as the last parameter:

```js
const mySession = await InferenceSession.create(..., mySessionOptions);
```

### Execution providers

An [execution provider](https://onnxruntime.ai/docs/reference/execution-providers/) (EP) defines how operators get resolved to specific kernel implementations. Following is a table of supported EPs in different environments:

| EP name | Hardware          | Available in                      |
| ------- | ----------------- | --------------------------------- |
| `cpu`   | CPU (default CPU) | onnxruntime-node                  |
| `cuda`  | GPU (NVIDIA CUDA) | onnxruntime-node                  |
| `dml`   | GPU (DirectML)    | onnxruntime-node (Windows)        |
| `wasm`  | CPU (WebAssembly) | onnxruntime-web, onnxruntime-node |
| `webgl` | GPU (WebGL)       | onnxruntime-web                   |

The execution provider is specified by `sessionOptions.executionProviders`. Multiple EPs can be specified, and the first available one will be used.

Following are some example code snippets:

```js
// [Node.js binding example] Use CPU EP.
const sessionOption = { executionProviders: ['cpu'] };
```

```js
// [Node.js binding example] Use CUDA EP.
const sessionOption = { executionProviders: ['cuda'] };
```

```js
// [Node.js binding example] Use CUDA EP, specifying device ID.
const sessionOption = {
  executionProviders: [
    {
      name: 'cuda',
      deviceId: 0
    }
  ]
};
```

```js
// [Node.js binding example] Try multiple EPs using an execution provider list.
// The first successfully initialized one will be used. Use CUDA EP if it is available, otherwise fall back to CPU EP.
const sessionOption = { executionProviders: ['cuda', 'cpu'] };
```
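The fallback behavior above can be pictured as a first-match scan over the requested list. The following is a minimal sketch of that selection logic; `pickExecutionProvider` is a hypothetical helper for illustration only, since the real selection happens inside `InferenceSession.create()`:

```js
// Sketch of the "first available EP wins" behavior described above.
// NOTE: pickExecutionProvider is a hypothetical helper, NOT an onnxruntime API.
function pickExecutionProvider(requested, available) {
  for (const ep of requested) {
    // entries may be plain names ('cuda') or objects ({ name: 'cuda', deviceId: 0 })
    const name = typeof ep === 'string' ? ep : ep.name;
    if (available.includes(name)) {
      return name;
    }
  }
  throw new Error('no requested execution provider is available');
}

// machine with CUDA available: 'cuda' is tried first and wins
pickExecutionProvider(['cuda', 'cpu'], ['cpu', 'cuda']); // -> 'cuda'

// CPU-only machine: 'cuda' is unavailable, so it falls back to 'cpu'
pickExecutionProvider(['cuda', 'cpu'], ['cpu']); // -> 'cpu'
```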

```js
// [ONNX Runtime Web example] Use WebAssembly (CPU) EP.
const sessionOption = { executionProviders: ['wasm'] };
```

```js
// [ONNX Runtime Web example] Use WebGL EP.
const sessionOption = { executionProviders: ['webgl'] };
```

### Other common options

There are also some other options available for all EPs.

Following are some example code snippets:

```js
// [Node.js binding example] Use CPU EP with single-threaded execution and verbose logging.
const sessionOption = {
  executionProviders: ['cpu'],
  interOpNumThreads: 1,
  intraOpNumThreads: 1,
  logSeverityLevel: 0
};
```
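As context for the `logSeverityLevel: 0` setting above, ONNX Runtime's numeric severity levels run from most to least verbose. The lookup table below is for illustration only, not part of the API:

```js
// Numeric logSeverityLevel values and the severities they correspond to
// in ONNX Runtime's logging levels (illustrative table, not an API object).
const LOG_SEVERITY = {
  0: 'verbose',
  1: 'info',
  2: 'warning',
  3: 'error',
  4: 'fatal'
};

LOG_SEVERITY[0]; // -> 'verbose', the most detailed level, as used in the example above
```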

```js
// [ONNX Runtime Web example] Use WebAssembly EP and enable profiling.
const sessionOption = {
  executionProviders: ['wasm'],
  enableProfiling: true
};
```

See also the [`SessionOptions` interface](https://onnxruntime.ai/docs/api/js/interfaces/InferenceSession.SessionOptions.html) in the API reference documentation.

### SessionOptions vs. ort.env

Both `SessionOptions` and `ort.env` let you configure inference behavior. The key difference is scope: `SessionOptions` applies to a single inference session instance, while `ort.env` applies globally.

See also [API usage - `ort.env` flags](../api-usage_ort-env-flags) for an example of using `ort.env`.

## Usage

The code snippets demonstrated above cannot run standalone. Put the code into one of the "Quick Start" examples and try it out.

Diff for: js/api-usage_tensor/README.md

+1
@@ -7,6 +7,7 @@ This example is a demonstration of basic usage of `Tensor`.
 - `tensor-create.js`: In this example, we create tensors in different ways.
 - `tensor-properties.js`: In this example, we get tensor properties from a Tensor object.
 
+See also the [`Tensor` constructor](https://onnxruntime.ai/docs/api/js/interfaces/TensorConstructor.html) and the [`Tensor` interface](https://onnxruntime.ai/docs/api/js/interfaces/Tensor.html) in the API reference documentation.
 ## Usage
 
 ```sh
