Commit c6c0899

Make quantize_pt2 private and remove external call sites
Differential Revision: D74096228
Pull Request resolved: #10683
1 parent be2dda7 commit c6c0899

File tree

1 file changed (+2 −1 lines)


backends/cadence/aot/compiler.py

Lines changed: 2 additions & 1 deletion
@@ -144,7 +144,6 @@ def fuse_pt2(
     return converted_graph_module


-# Note: this is the one-liner API to quantize and fuse a model.
 def quantize_pt2(
     model: torch.nn.Module,
     inputs: tuple[object, ...],
@@ -158,6 +157,8 @@ def quantize_pt2(
     not, the inputs will be used for calibration instead, which is useful for
     unit tests but should not be used for end-to-end use cases.
     Returns a GraphModule with the quantized model.
+    Note: this function should not be called directly in general. Please use
+    quantize_and_export_to_executorch for most needs.
     """
     # Make the model inference mode by calling model.eval()
     model.eval()

0 commit comments
