model.save fails with ValueError __inference_conv2d_transpose_layer_call_fn_4530 when Conv2DTranspose is quantization aware #115
Comments
@jvishnuvardhan Was able to replicate the issue on Colab using TF v2.7.0 and 2.8.0; please find the attached gists. Thanks!
Hey @jvishnuvardhan, did you have time to look at this issue? We really would appreciate some feedback on this.
This is very similar to this issue. Please take a look. Thanks!
@gowthamkpr Thanks for the link, I have checked the issue. The issue might be outdated, because it is from 2020. To my knowledge, however, saving a model with q-aware …
Is it because you want to export the model for inference? As a workaround you may want to try to use …
@fchollet I have a technical requirement to save my model in the .pb format, because this is the format used by other parts/modules of the project to which I have to pass my model. Saving my model as .h5 is a temporary workaround and I am hoping for a permanent solution :) Thanks for your suggestion. Unfortunately the same error is reproducible with …
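For context, a minimal sketch of the save calls under discussion, assuming `qat_model` is a quantization-aware Keras model containing `Conv2DTranspose` (the paths are illustrative, not from the original thread):

```python
import tensorflow as tf

# Keras infers the format from the path (or from save_format):
qat_model.save("model.h5")         # HDF5: the temporary workaround, saves fine
qat_model.save("saved_model_dir")  # SavedModel (.pb): raises the ValueError

# Equivalent explicit call, which fails the same way:
tf.keras.models.save_model(qat_model, "saved_model_dir", save_format="tf")
```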
Also adding @abattery from the TFMOT team.
Hi, any news on this?
I am facing the same issue, did anyone find a solution?
@remicres @pgrommel @ypsilon-elle Did you manage to solve the problem? Thanks!
Originally I posted this bug #54753 on tensorflow/tensorflow and was advised to repost it here.
System information
- tf=2.7.0, tensorflow_model_optimization=0.7.1
- tf=2.8.0, tensorflow_model_optimization=0.7.1
- tf-nightly=2.9.0dev20211222, tensorflow_model_optimization=0.7.1
Describe the problem
We save a quantization-aware Keras model in the .pb model format using `model.save()`. This operation fails with `ValueError: __inference_conv2d_transpose_layer_call_fn_4530` when our model contains a `Conv2DTranspose` layer.
- The same error occurs with `tf.keras.models.save_model()` too.
- The error occurs both when we make the whole model quantization aware with `tfmot.quantization.keras.quantize_model()` and when we annotate selected layers, clone the model with `tf.keras.models.clone_model()`, and apply quantization using `tfmot.quantization.keras.quantize_apply()` (sketched below).
- Our current workaround is to not annotate `Conv2DTranspose`, but this prevents us from having a fully quantization-aware model.
- Saving the same model as .h5 works (unfortunately this workaround is not suitable for us because our technical requirement is to save a .pb model).
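A minimal sketch of the first failure path (whole-model quantization); the architecture below is illustrative, not the exact model from the Colabs:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Tiny illustrative model containing a Conv2DTranspose layer.
inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(x)
outputs = tf.keras.layers.Conv2D(1, 1, padding="same")(x)
model = tf.keras.Model(inputs, outputs)

# First way: make the whole model quantization aware.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(optimizer="adam", loss="mse")

qat_model.save("qat_model.h5")     # HDF5: works
qat_model.save("qat_saved_model")  # SavedModel (.pb): raises
# ValueError: __inference_conv2d_transpose_layer_call_fn_...
```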
Describe the expected behavior
`model.save()` saves a QAT model with a `Conv2DTranspose` layer in the .pb format successfully.
Standalone code to reproduce the issue
Here are the Colabs to reproduce the issue using a very simple model with a `Conv2DTranspose` layer and the two ways to make a model quantization aware mentioned above:
- Colab with tf2.7.0
- Colab with tf2.8.0
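For reference, a sketch of the second way (selective annotation, then clone and apply), following the documented TFMOT pattern and reusing the same illustrative model as above:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Same illustrative model as in the sketch above.
inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(x)
outputs = tf.keras.layers.Conv2D(1, 1, padding="same")(x)
model = tf.keras.Model(inputs, outputs)

def annotate(layer):
    # Annotate the conv layers (Conv2DTranspose subclasses Conv2D). The
    # current workaround is to skip Conv2DTranspose here, at the cost of
    # a only partially quantization-aware model.
    if isinstance(layer, tf.keras.layers.Conv2D):
        return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer

annotated = tf.keras.models.clone_model(model, clone_function=annotate)
qat_model = tfmot.quantization.keras.quantize_apply(annotated)

qat_model.save("qat_saved_model")  # SavedModel: fails with the same ValueError
```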
Other info / logs
Similar issue #868