1 parent 4cdbb8d commit eb6ad2b
src/sparseml/modifiers/quantization/gptq/base.py
@@ -50,6 +50,7 @@ class GPTQModifier(Modifier):
     - LayerCompressor.revert_layer_wrappers()
 
 
+    :param actorder: Whether to use activation reordering or not
     :param sequential_update: Whether or not to update weights sequentially by layer,
         True saves on GPU memory
     :param targets: list of layer names to compress during GPTQ, or '__ALL__'
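For context, the new `actorder` flag sits alongside the existing `GPTQModifier` parameters. A rough sketch of how these might be set in a sparseml-style YAML recipe follows; the stage and key names are illustrative assumptions, and only the parameter names (`actorder`, `sequential_update`, `targets`) come from the diff above:

```yaml
# Hypothetical recipe fragment (stage/key names assumed, not from this commit)
quantization_stage:
  quant_modifiers:
    GPTQModifier:
      actorder: true            # enable activation reordering
      sequential_update: true   # update weights layer by layer; saves GPU memory
      targets: __ALL__          # compress all layers
```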