Commit ae9517f

Merge pull request #734 from NVIDIA/readme_update
chore: Update partitioning readme
2 parents: 4d2cb14 + 8e18b2b

1 file changed (+5 / -8 lines)

Diff for: core/partitioning/README.md

@@ -34,11 +34,9 @@ To enable automatic fallback feature, you can set following attributes in Python
 ts_model = torch.jit.script(model)
 trt_model = torchtrt.ts.compile(model, **{
     ...
-    "torch_fallback" : {
-      "enabled" : True,
-      "min_block_size" : 3,
-      "forced_fallback_ops": ["aten::add"],
-    }
+    "min_block_size" : 3,
+    "torch_executed_ops": ["aten::add"],
+    "torch_executed_modules": [],
 })
 ```
 - `enabled`: By default automatic fallback will be off. It is enabled by setting it to True.
@@ -59,9 +57,8 @@ auto in = torch::randn({1, 3, 224, 224}, {torch::kCUDA});
 auto mod = torch::jit::load("trt_ts_module.ts");
 auto input_sizes = std::vector<torchtrt::InputRange>{{in.sizes()}};
 torchtrt::ts::CompileSpec cfg(input_sizes);
-cfg.torch_fallback = torchtrt::CompileSpec::TorchFallback(true);
-cfg.torch_fallback.min_block_size = 2;
-cfg.torch_fallback.forced_fallback_ops.push_back("aten::relu");
+cfg.min_block_size = 2;
+cfg.torch_executed_ops.push_back("aten::relu");
 auto trt_mod = torchtrt::ts::compile(mod, cfg);
 auto out = trt_mod.forward({in});
 ```
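For orientation, below is a minimal sketch of the updated Python usage from the first hunk written out in full. Only the partitioning keys (`min_block_size`, `torch_executed_ops`, `torch_executed_modules`) come from the diff itself; the `TinyNet` module, the `inputs` entry, and the input shape are illustrative assumptions, not part of this commit.

```python
import torch
import torch.nn as nn
import torch_tensorrt as torchtrt

# Placeholder model purely for illustration; any scriptable nn.Module works.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x)) + 1

model = TinyNet().eval().cuda()
ts_model = torch.jit.script(model)

trt_model = torchtrt.ts.compile(ts_model, **{
    # "inputs" is standard torch_tensorrt usage, not part of the diff.
    "inputs": [torchtrt.Input((1, 3, 224, 224))],
    # Fallback settings now sit at the top level of the compile spec,
    # replacing the old nested "torch_fallback" dict:
    "min_block_size": 3,                   # min contiguous convertible ops per TensorRT block
    "torch_executed_ops": ["aten::add"],   # ops forced to stay in PyTorch
    "torch_executed_modules": [],          # modules forced to stay in PyTorch
})

out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))
```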
