1 parent 71e684b · commit b484b8d
torch/ao/quantization/_learnable_fake_quantize.py
@@ -9,8 +9,7 @@ class _LearnableFakeQuantize(torch.ao.quantization.FakeQuantizeBase):
 
     This is an extension of the FakeQuantize module in fake_quantize.py, which
     supports more generalized lower-bit quantization and support learning of the scale
-    and zero point parameters through backpropagation. For literature references,
-    please see the class _LearnableFakeQuantizePerTensorOp.
+    and zero point parameters through backpropagation.
 
     In addition to the attributes in the original FakeQuantize module, the _LearnableFakeQuantize
     module also includes the following attributes to support quantization parameter learning.
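
For context, a minimal usage sketch (not part of the commit itself): it assumes the constructor signature and the enable_param_learning() helper defined in the patched file, which may differ across PyTorch versions. It shows the learnable scale and zero point receiving gradients through an ordinary backward pass.

import torch
from torch.ao.quantization._learnable_fake_quantize import _LearnableFakeQuantize
from torch.ao.quantization.observer import MovingAverageMinMaxObserver

# Sketch of a per-tensor learnable fake-quantize module; argument values are
# illustrative, not prescribed by the commit.
fq = _LearnableFakeQuantize(
    observer=MovingAverageMinMaxObserver,
    quant_min=0,
    quant_max=255,
    scale=1.0,
    zero_point=0.0,
    use_grad_scaling=True,
)
fq.enable_param_learning()  # switch from static observer estimates to learned qparams

x = torch.randn(4, 8)
y = fq(x)           # fake-quantized output, differentiable w.r.t. scale and zero point
y.sum().backward()  # gradients accumulate in fq.scale and fq.zero_point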