Commit 534defd

Authored by Frank Yu (yu-frank)

Fix typo in executorch documentation (#9000)

Summary: Fix a typo in the ExecuTorch documentation: https://pytorch.org/executorch/main/backends-xnnpack.html

Reviewed By: cccclai

Differential Revision: D70645356

Co-authored-by: Frank Yu <[email protected]>

1 parent 03f064b commit 534defd

File tree

1 file changed: +3 −3 lines changed

docs/source/backends-xnnpack.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -1,6 +1,6 @@
 # XNNPACK Backend
 
-The XNNPACK delegate is the ExecuTorch solution for CPU execution on mobile CPUs. XNNPACK is a library that provides optimized kernels for machine learning operators on Arm and x86 CPUs.
+The XNNPACK delegate is the ExecuTorch solution for CPU execution on mobile CPUs. XNNPACK is a library that provides optimized kernels for machine learning operators on Arm and x86 CPUs.
 
 ## Features
 
@@ -51,7 +51,7 @@ The XNNPACK partitioner API allows for configuration of the model delegation to
 
 ### Quantization
 
-The XNNPACK delegate can also be used as a backend to execute symmetrically quantized models. To quantize a PyTorch model for the XNNPACK backend, use the `XNNPACKQuantizer`. `Quantizers` are backend specific, which means the `XNNPACKQuantizer` is configured to quantize models to leverage the quantized operators offered by the XNNPACK Library.
+The XNNPACK delegate can also be used as a backend to execute symmetrically quantized models. To quantize a PyTorch model for the XNNPACK backend, use the `XNNPACKQuantizer`. `Quantizers` are backend specific, which means the `XNNPACKQuantizer` is configured to quantize models to leverage the quantized operators offered by the XNNPACK Library.
 
 ### Configuring the XNNPACKQuantizer
 
@@ -95,7 +95,7 @@ for cal_sample in cal_samples: # Replace with representative model inputs
 
 quantized_model = convert_pt2e(prepared_model)
 ```
-For static, post-training quantization (PTQ), the post-prepare\_pt2e model should beS run with a representative set of samples, which are used to determine the quantization parameters.
+For static, post-training quantization (PTQ), the post-prepare\_pt2e model should be run with a representative set of samples, which are used to determine the quantization parameters.
 
 After `convert_pt2e`, the model can be exported and lowered using the normal ExecuTorch XNNPACK flow. For more information on PyTorch 2 quantization [here](https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html).
````
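The corrected paragraph says the prepared model "should be run with a representative set of samples, which are used to determine the quantization parameters." As a standalone illustration of that idea (a toy sketch, not ExecuTorch or `XNNPACKQuantizer` internals), here is how a calibration pass over representative samples can fix a per-tensor symmetric int8 scale, which is then used to quantize new values:

```python
# Toy sketch of static PTQ calibration: representative samples determine
# the symmetric quantization scale. This is NOT the XNNPACKQuantizer
# implementation, just the concept the docs paragraph describes.

def calibrate_symmetric_scale(samples, num_bits=8):
    """Pick a per-tensor symmetric scale from calibration samples."""
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    max_abs = max(abs(v) for batch in samples for v in batch)
    return max_abs / qmax if max_abs else 1.0

def quantize(values, scale, num_bits=8):
    """Round to the nearest quantized step and clamp to the int range."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    return [max(qmin, min(qmax, round(v / scale))) for v in values]

# "Run with a representative set of samples" -> observe ranges, fix scale.
cal_samples = [[-1.0, 0.5], [0.25, 2.0]]  # hypothetical calibration data
scale = calibrate_symmetric_scale(cal_samples)
q = quantize([2.0, -1.0], scale)
```

In the real flow, `prepare_pt2e` inserts observers that collect these ranges while the calibration samples run through the model, and `convert_pt2e` bakes the resulting parameters into quantized operators.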

0 commit comments