
Commit c57449d

hwangdeyu and fatcat-z authored

Update CLI reference doc, adjust parameter orders and add outputs-as-nchw (#2028)

* Update CLI reference doc

Signed-off-by: Deyu Huang <[email protected]>
Co-authored-by: Jay Zhang <[email protected]>

1 parent f4902a4 · commit c57449d

File tree

1 file changed: +22 −11 lines


README.md

Lines changed: 22 additions & 11 deletions
@@ -132,6 +132,7 @@ python -m tf2onnx.convert
     [--inputs GRAPH_INPUTS]
     [--outputs GRAPH_OUTPUTS]
     [--inputs-as-nchw inputs_provided_as_nchw]
+    [--outputs-as-nchw outputs_provided_as_nchw]
     [--opset OPSET]
     [--dequantize]
     [--tag TAG]
@@ -180,9 +181,13 @@ TensorFlow model's input/output names, which can be found with [summarize graph

 By default we preserve the image format of inputs (`nchw` or `nhwc`) as given in the TensorFlow model. If your host's native format is nchw (for example, Windows) and the model is written for nhwc, pass ```--inputs-as-nchw``` and tensorflow-onnx will transpose the input. Doing so is convenient for the application, and in many cases the converter can optimize the transpose away. For example, ```--inputs input0:0,input1:0 --inputs-as-nchw input0:0``` assumes that images are passed into ```input0:0``` as nchw while the given TensorFlow model uses nhwc.

+#### --outputs-as-nchw
+
+Similar in usage to `--inputs-as-nchw`. By default we preserve the format of outputs (`nchw` or `nhwc`) as given in the TensorFlow model. If your host's native format is nchw and the model is written for nhwc, pass ```--outputs-as-nchw``` and tensorflow-onnx will transpose the output and optimize the transpose away. For example, ```--outputs output0:0,output1:0 --outputs-as-nchw output0:0``` changes ```output0:0``` to nchw while the given TensorFlow model uses nhwc.
+
 #### --ignore_default, --use_default

-ONNX requires default values for graph inputs to be constant, while Tensorflow's PlaceholderWithDefault op accepts computed defaults. To convert such models, pass a comma-separated list of node names to the ignore_default and/or use_default flags. PlaceholderWithDefault nodes with matching names will be replaced with Placeholder or Identity ops, respectively.
+ONNX requires default values for graph inputs to be constant, while Tensorflow's PlaceholderWithDefault op accepts computed defaults. To convert such models, pass a comma-separated list of node names to the ignore_default and/or use_default flags. PlaceholderWithDefault nodes with matching names will be replaced with Placeholder or Identity ops, respectively.

 #### --opset

@@ -208,15 +213,9 @@ Only valid with parameter `--saved_model`. Specifies which signature to use with

 Only valid with parameter `--saved_model`. If a model contains a list of concrete functions, under the function name `__call__` (as can be viewed using the command `saved_model_cli show --all`), this parameter is a 0-based integer specifying which function in that list should be converted. This parameter takes priority over `--signature_def`, which will be ignored.

-#### --large_model
-
-(Can be used only for TF2.x models)
-
-Only valid with parameter `--saved_model`. When set, creates a zip file containing the ONNX protobuf model and large tensor values stored externally. This allows for converting models that exceed the 2 GB protobuf limit.
-
-#### --output_frozen_graph
+#### --target

-Saves the frozen and optimize tensorflow graph to file.
+Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.

 #### --custom-ops

@@ -229,9 +228,21 @@ will be used.

 Load the comma-separated list of tensorflow plugin/op libraries before conversion.

-#### --target
+#### --large_model

-Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.
+(Can be used only for TF2.x models)
+
+Only valid with parameter `--saved_model`. When set, creates a zip file containing the ONNX protobuf model and large tensor values stored externally. This allows for converting models whose size exceeds the 2 GB protobuf limit.
+
+#### --continue_on_error
+Continue to run conversion on error; ignore graph cycles so the converter can report all missing ops and errors.
+
+#### --verbose
+Verbose detailed output for diagnostic purposes.
+
+#### --output_frozen_graph
+
+Save the frozen and optimized tensorflow graph to a file for debugging.


 ### <a name="summarize_graph"></a>Tool to get Graph Inputs & Outputs
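Tying the reordered options together, a debugging-oriented conversion of a large TF2.x SavedModel could be invoked as below. The paths are hypothetical, and the `.zip` output name reflects the archive that `--large_model` produces, as described in the diff above.

```
# Hypothetical invocation: keep going past errors, print diagnostics, and
# write a zip archive since the model exceeds the 2 GB protobuf limit.
python -m tf2onnx.convert --saved-model ./big_model --output big_model.zip \
    --large_model --continue_on_error --verbose
```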
