
model.save fails with ValueError __inference_conv2d_transpose_layer_call_fn_4530 when Conv2DTranspose is quantization aware #115


Open
ypsilon-elle opened this issue Mar 2, 2022 · 10 comments
@ypsilon-elle

Originally I posted this bug #54753 on tensorflow/tensorflow and was advised to repost it here.

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow installed from (source or binary): pip install
  • TensorFlow version:
    tf=2.7.0, tensorflow_model_optimization=0.7.1
    tf=2.8.0, tensorflow_model_optimization=0.7.1
    tf-nightly=2.9.0dev20211222, tensorflow_model_optimization=0.7.1
  • Python version: 3.7.12

Describe the problem
We save a quantization-aware Keras model in the .pb (SavedModel) format using model.save(). This operation fails with ValueError: __inference_conv2d_transpose_layer_call_fn_4530 when our model contains a Conv2DTranspose layer.

  • The error is reproducible with tf.keras.models.save_model() too
  • The error is reproducible when we quantize the entire model using tfmot.quantization.keras.quantize_model()
  • The error is also reproducible when we annotate layers using tf.keras.models.clone_model() and apply quantization using tfmot.quantization.keras.quantize_apply(). Our current workaround is to not annotate Conv2DTranspose (see the sketch after this list), but this prevents us from having a fully quantization-aware model.
  • The error is reproducible in tf2.7.0, tf2.8.0 and tf-nightly
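
A minimal sketch of the partial-annotation workaround mentioned above (the names apply_quantization and base_model are taken from the traceback below; the InputLayer guard is our assumption, not necessarily the exact Colab code):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

def apply_quantization(layer):
    # Workaround: leave Conv2DTranspose (and the input layer) unannotated so
    # quantize_apply() does not wrap it; all other layers become quantization-aware.
    if isinstance(layer, (tf.keras.layers.Conv2DTranspose, tf.keras.layers.InputLayer)):
        return layer
    return tfmot.quantization.keras.quantize_annotate_layer(layer)

annotated_model = tf.keras.models.clone_model(base_model, clone_function=apply_quantization)
q_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
```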

Saving the same model as .h5 works (unfortunately this workaround is not suitable for us because our technical requirement is to save a .pb model).

Describe the expected behavior
model.save() saves a QAT model with a Conv2DTranspose layer in the .pb format successfully.

Standalone code to reproduce the issue
Here are the Colabs to reproduce the issue, using a very simple model with a Conv2DTranspose layer and the two ways of making a model quantization-aware mentioned above:
- Colab with tf2.7.0
- Colab with tf2.8.0
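
Since the Colabs are external, here is a hedged, self-contained sketch along the same lines (the toy architecture and save paths are illustrative, not the exact Colab content):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy model containing a Conv2DTranspose layer (illustrative only).
inputs = tf.keras.Input(shape=(16, 16, 3))
x = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
outputs = tf.keras.layers.Conv2DTranspose(16, 3, padding='same')(x)
base_model = tf.keras.Model(inputs, outputs)

# Way 1: quantize the entire model in one call.
q_aware_model = tfmot.quantization.keras.quantize_model(base_model)

q_aware_model.save('q_aware_model.h5')  # HDF5: works
q_aware_model.save('q_aware_model')     # SavedModel (.pb): raises the ValueError below
```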

Other info / logs
Similar issue #868

Traceback
ValueError                                Traceback (most recent call last)
<ipython-input-7-dc1f93a93afb> in <module>()
      2 annotated_model = tf.keras.models.clone_model(base_model, clone_function=apply_quantization)
      3 q_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
----> 4 q_aware_model.save('/output_folder/q_aware_model') # save keras model as .pb, fails

1 frames
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     65     except Exception as e:  # pylint: disable=broad-except
     66       filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67       raise e.with_traceback(filtered_tb) from None
     68     finally:
     69       del filtered_tb

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in map_resources(self)
    402           if capture_constant_value is None:
    403             raise ValueError(
--> 404                 f"Unable to save function {concrete_function.name} because it "
    405                 f"captures graph tensor {capture} from a parent function which "
    406                 "cannot be converted to a constant with `tf.get_static_value`.")

ValueError: Unable to save function b'__inference_conv2d_transpose_layer_call_fn_4530' because it captures graph tensor 
Tensor("model/quant_conv2d_transpose/transpose_1:0", shape=(3, 3, 16, 16), dtype=float32) from a parent function which 
cannot be converted to a constant with `tf.get_static_value`.
@sushreebarsa
Contributor

@jvishnuvardhan I was able to replicate the issue on Colab using TF v2.7.0 and 2.8.0; please find the attached gists. Thanks!

@ypsilon-elle
Author

Hey @jvishnuvardhan, did you have time to look at this issue? We would really appreciate some feedback on this.

@gowthamkpr
Contributor

gowthamkpr commented Apr 20, 2022

This is very similar to this issue. Please take a look. Thanks!

@ypsilon-elle
Author

This is very similar to this issue. Conv2Dtranspose is not supported in quantization. Please take a look. Thanks!

@gowthamkpr Thanks for the link; I have checked the issue. It might be outdated, though, since it is from 2020.

To my knowledge, Conv2DTranspose is supported by quantization-aware training. A quantization config for Conv2DTranspose is defined in /default_8bit/default_8bit_quantize_registry.py. In fact, a Conv2DTranspose layer can be quantized successfully; such a model can be trained with QAT and saved as .h5. Please see the attached Colabs in the issue description.

However, saving a model with a quantization-aware Conv2DTranspose as .pb fails. So, by the look of it, something goes wrong in model.save().
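
For completeness, a hedged sketch of what does work today, assuming the q_aware_model from the toy repro above and random dummy data:

```python
import numpy as np

# QAT fine-tuning runs fine and the model can be saved in HDF5 format.
x = np.random.rand(8, 16, 16, 3).astype('float32')
y = np.random.rand(8, 16, 16, 16).astype('float32')
q_aware_model.compile(optimizer='adam', loss='mse')
q_aware_model.fit(x, y, epochs=1, batch_size=4)
q_aware_model.save('q_aware_model.h5')  # .h5 save succeeds; the .pb save is what fails
```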

@fchollet
Contributor

fchollet commented May 5, 2022

Saving the same model as .h5 works (unfortunately this workaround is not suitable for us because our technical requirement is to save a .pb-model).

Is it because you want to export the model for inference?

As a workaround you may want to try using tf.saved_model.save(model, name) and see if that works.
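
A minimal sketch of that suggestion, assuming the q_aware_model from the repro above and an illustrative export path:

```python
import tensorflow as tf

# Try the low-level SavedModel API directly instead of Keras model.save().
tf.saved_model.save(q_aware_model, '/output_folder/q_aware_model_tf_saved_model')
```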

@ypsilon-elle
Author

@fchollet I have a technical requirement to save my model in the .pb format because that is the format used by other parts/modules of the project that my model has to be passed to. Saving my model as .h5 is a temporary workaround and I am hoping for a permanent solution :)

Thanks for your suggestion. Unfortunately, the same error is reproducible with tf.keras.models.save_model() too.

@qlzh727
Member

qlzh727 commented May 11, 2022

Also adding @abattery from the TFMOT team.

@remicres

Hi, any news on this?

@pgrommel

I am facing the same issue. Did anyone find a solution?

@jonGuti13

@remicres @pgrommel @ypsilon-elle Did you manage to solve the problem? Thanks!
