LLM custom ops tutorial should direct to general custom ops (#10139)
Instead of having two pages that use the same custom-op example, consolidate them into one. Also move the Source-to-source transformation section to kernel-library-custom-aten-kernel.md.
+The custom operator can be explicitly used in the PyTorch model, or you can write a transformation to replace instances of a core operator with the custom variant. For this example, you could find all instances of `torch.nn.Linear` and replace them with `CustomLinear`.
docs/source/llm/getting-started.md (+3 −89)

@@ -855,99 +855,13 @@ With the ExecuTorch custom operator APIs, custom operator and kernel authors can
There are three steps to use custom kernels in ExecuTorch:
-1. Write the custom kernel using ExecuTorch types.
-2. Compile and link the custom kernel to both the AOT Python environment and the runtime binary.
-3. Source-to-source transformation to swap an operator with a custom op.
-
-### Writing a Custom Kernel
-
-Define your custom operator schema for both the functional variant (used in AOT compilation) and the out variant (used in the ExecuTorch runtime). The schema needs to follow the PyTorch ATen convention (see [native_functions.yaml](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/native_functions.yaml)).
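As an editor's sketch of the convention described above, a functional/out schema pair might look like the following; the operator name and signatures are illustrative assumptions, not taken from this PR:

```python
# Hypothetical schema strings following the ATen convention.
# The functional variant returns a new tensor; the out variant writes into a
# caller-provided buffer, which is what the ExecuTorch runtime requires.
FUNCTIONAL_SCHEMA = "custom_linear(Tensor input, Tensor weight, Tensor? bias) -> Tensor"
OUT_SCHEMA = (
    "custom_linear.out(Tensor input, Tensor weight, Tensor? bias, "
    "*, Tensor(a!) out) -> Tensor(a!)"
)
```

The `(a!)` annotation marks the `out` argument as mutated and aliased with the return value, matching the out-variant style used throughout native_functions.yaml.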
-Write your custom kernel according to the schema defined above. Use the `EXECUTORCH_LIBRARY` macro to make the kernel available to the ExecuTorch runtime.
-
-To make it available to the ExecuTorch runtime, compile custom_linear.h/cpp into the binary target. You can also build the kernel as a dynamically loaded library (.so or .dylib) and link it into the runtime.
-
-To make it available to PyTorch, package custom_linear.h, custom_linear.cpp and custom_linear_pytorch.cpp into a dynamically loaded library (.so or .dylib) and load it into the Python environment.
-This is needed to make PyTorch aware of the custom operator at the time of export.
-
-```python
-import torch
-torch.ops.load_library("libcustom_linear.so")
-```
-
-Once loaded, you can use the custom operator in PyTorch code.
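To illustrate what such a registration makes available, here is a pure-Python sketch that uses `torch.library` in place of the compiled `.so`; the `my_ops` namespace and the plain-linear reference implementation are assumptions for the example, not part of this PR:

```python
import torch

# Sketch: register a custom_linear op from Python instead of loading a
# compiled library; the namespace "my_ops" is hypothetical.
lib = torch.library.Library("my_ops", "DEF")
lib.define("custom_linear(Tensor input, Tensor weight, Tensor? bias) -> Tensor")

def custom_linear_impl(input, weight, bias):
    # Reference implementation: ordinary linear-layer math.
    return torch.nn.functional.linear(input, weight, bias)

lib.impl("custom_linear", custom_linear_impl, "CompositeExplicitAutograd")

# Once registered (or once the .so is loaded), the op is callable like any
# other torch.ops entry and is visible to export.
x = torch.randn(2, 8)
w = torch.randn(4, 8)
out = torch.ops.my_ops.custom_linear(x, w, None)
```

A library loaded via `torch.ops.load_library` exposes its operators through the same `torch.ops.<namespace>.<name>` path.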
+1. [Write the custom kernel](../kernel-library-custom-aten-kernel.md#c-api-for-custom-ops) using ExecuTorch types.
+2. [Compile and link the custom kernel](../kernel-library-custom-aten-kernel.md#compile-and-link-the-custom-kernel) to both the AOT Python environment and the runtime binary.
+3. [Source-to-source transformation](../kernel-library-custom-aten-kernel.md#using-a-custom-operator-in-a-model) to swap an operator with a custom op.
For more information, see [PyTorch Custom Operators](https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html) and [ExecuTorch Kernel Registration](../kernel-library-custom-aten-kernel.md).
-
-### Using a Custom Operator in a Model
-
-The custom operator can be explicitly used in the PyTorch model, or you can write a transformation to replace instances of a core operator with the custom variant. For this example, you could find all instances of `torch.nn.Linear` and replace them with `CustomLinear`.
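The replacement pass described above can be sketched as a recursive walk over the module tree. This `CustomLinear` is a stand-in that keeps standard linear math, since the real variant would dispatch to the compiled custom op:

```python
import torch
import torch.nn as nn

class CustomLinear(nn.Module):
    """Stand-in for the custom variant. A real version would call the
    registered custom op instead of nn.functional.linear."""
    def __init__(self, weight, bias):
        super().__init__()
        self.weight = nn.Parameter(weight.detach().clone())
        self.bias = nn.Parameter(bias.detach().clone()) if bias is not None else None

    def forward(self, x):
        return nn.functional.linear(x, self.weight, self.bias)

def swap_linear(module: nn.Module) -> None:
    """Recursively replace every nn.Linear child with CustomLinear, in place."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, CustomLinear(child.weight, child.bias))
        else:
            swap_linear(child)
```

After calling `swap_linear(model)`, no `nn.Linear` modules remain, and because the stand-in reuses the original weights the outputs are numerically unchanged.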