```
python -m tf2onnx.convert
    [--inputs GRAPH_INPUTS]
    [--outputs GRAPH_OUTPUTS]
    [--inputs-as-nchw inputs_provided_as_nchw]
    [--outputs-as-nchw outputs_provided_as_nchw]
    [--opset OPSET]
    [--dequantize]
    [--tag TAG]
```
By default we preserve the image format of inputs (`nchw` or `nhwc`) as given in the TensorFlow model. If your host's native format is nchw (for example on Windows) while the model was written for nhwc, passing ```--inputs-as-nchw``` makes tensorflow-onnx transpose the input. Doing so is convenient for the application, and in many cases the converter can optimize the transpose away. For example, ```--inputs input0:0,input1:0 --inputs-as-nchw input0:0``` assumes that images are passed into ```input0:0``` as nchw while the given TensorFlow model uses nhwc.
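To illustrate the layout change involved, here is a standalone NumPy sketch (not part of tf2onnx; the shape values are hypothetical example numbers):

```python
import numpy as np

# A batch of images in TensorFlow's default NHWC layout:
# (batch, height, width, channels).
batch_nhwc = np.zeros((1, 224, 224, 3), dtype=np.float32)

# NHWC -> NCHW: move the channel axis ahead of height and width.
# This is the transpose that --inputs-as-nchw lets the converter
# absorb into the graph for you.
batch_nchw = np.transpose(batch_nhwc, (0, 3, 1, 2))
print(batch_nchw.shape)  # (1, 3, 224, 224)
```

The inverse permutation, `(0, 2, 3, 1)`, maps nchw back to nhwc.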
#### --outputs-as-nchw
Usage is similar to `--inputs-as-nchw`. By default we preserve the format of outputs (`nchw` or `nhwc`) as given in the TensorFlow model. If your host's native format is nchw while the model was written for nhwc, passing ```--outputs-as-nchw``` makes tensorflow-onnx transpose the output and optimize the transpose away. For example, ```--outputs output0:0,output1:0 --outputs-as-nchw output0:0``` returns ```output0:0``` in nchw while the given TensorFlow model uses nhwc.
#### --ignore_default, --use_default
ONNX requires default values for graph inputs to be constant, while Tensorflow's PlaceholderWithDefault op accepts computed defaults. To convert such models, pass a comma-separated list of node names to the ignore_default and/or use_default flags. PlaceholderWithDefault nodes with matching names will be replaced with Placeholder or Identity ops, respectively.
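The effect of the two flags can be sketched as follows in standalone Python (toy node dictionaries, not tf2onnx's real graph representation; the node names are hypothetical):

```python
def rewrite_placeholders(nodes, ignore_default=(), use_default=()):
    """Replace PlaceholderWithDefault ops by Placeholder or Identity."""
    out = []
    for node in nodes:
        if node["op"] == "PlaceholderWithDefault":
            if node["name"] in ignore_default:
                # Drop the computed default: becomes a plain graph input.
                node = {**node, "op": "Placeholder"}
            elif node["name"] in use_default:
                # Always use the default: becomes a pass-through of it.
                node = {**node, "op": "Identity"}
        out.append(node)
    return out

graph = [
    {"name": "keep_prob", "op": "PlaceholderWithDefault"},
    {"name": "training", "op": "PlaceholderWithDefault"},
]
rewritten = rewrite_placeholders(
    graph, ignore_default=["keep_prob"], use_default=["training"]
)
```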
#### --opset
Only valid with parameter `--saved_model`. If a model contains a list of concrete functions, under the function name `__call__` (as can be viewed using the command `saved_model_cli show --all`), this parameter is a 0-based integer specifying which function in that list should be converted. This parameter takes priority over `--signature_def`, which will be ignored.
#### --target
Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.
#### --custom-ops
Load the comma-separated list of tensorflow plugin/op libraries before conversion.
#### --large_model
(Can be used only for TF2.x models)
Only valid with parameter `--saved_model`. When set, creates a zip file containing the ONNX protobuf model and large tensor values stored externally. This allows converting models that exceed the 2 GB protobuf limit.
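Since the result is an ordinary zip archive, it can be inspected with standard tools. A minimal Python sketch (the member names below are hypothetical, not a layout tf2onnx guarantees; a stand-in archive is built in memory so the sketch is self-contained):

```python
import io
import zipfile

# Build a stand-in archive in memory; a real --large_model run writes
# this zip as the converter's output file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model.onnx", b"\x08\x07")    # protobuf graph (dummy bytes)
    zf.writestr("weights.bin", b"\x00" * 16)  # external tensor data (dummy)

# List the archive members.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
print(names)  # ['model.onnx', 'weights.bin']
```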
#### --continue_on_error
Continue the conversion on error and ignore graph cycles, so that all missing ops and errors can be reported.
#### --verbose
Verbose detailed output for diagnostic purposes.
#### --output_frozen_graph
Save the frozen and optimized TensorFlow graph to a file for debugging.
### <a name="summarize_graph"></a>Tool to get Graph Inputs & Outputs