QAT model saving bug: KeyError: '__inference_depthwise_conv2d_layer_call_fn_126 #868
Comments
Hi @Xhark,
System information:
- TensorFlow 2.5.0 (installed from binary) => TensorFlow Model Optimization 0.6.0 (installed from binary)
- TensorFlow 2.5.1 (installed from binary) => TensorFlow Model Optimization 0.7.0 (installed from binary)
- TensorFlow 2.4.0 (installed from binary) => TensorFlow Model Optimization 0.7.0 (installed from binary)
- Python version: 3.8.12
I used the environments listed above while trying to solve my problem.
Hi @peri044 and @Jia-HongHenryLee, I'm looking into it now, but there are a couple of workarounds.
I think this is caused by incorrect shape handling for the depthwise kernel quantization parameters, which results in functions not being traced/merged correctly. Thanks for reporting this.
Thank you @daverim for addressing this.
Hello @daverim, can you please suggest some pointers on how to fix this locally (using the saved_model format)? Which files/functions should I look at? Thanks!
Hey @peri044. If your ultimate goal is to convert the model into TFLite format, you can pass a ConcreteFunction around instead. from_concrete_functions of TFLiteConverter works just fine for me.
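For reference, a minimal sketch of that workaround; the model, input shape, and file names here are placeholders, not from the original comment:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Hypothetical stand-in for the model in this issue: input + DepthwiseConv2D,
# wrapped with the default QAT wrapper.
base = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(32, 32, 3)),
    tf.keras.layers.DepthwiseConv2D(kernel_size=3),
])
qat_model = tfmot.quantization.keras.quantize_model(base)

# Trace a single concrete function instead of round-tripping through a
# SavedModel, which is where the KeyError occurs.
run_model = tf.function(lambda x: qat_model(x))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([1, 32, 32, 3], tf.float32))

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```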
Hello @ChanZou. My ultimate goal is to use the saved_model format (if it works) and pass it through TF2ONNX to convert it into an ONNX graph. TF2ONNX currently accepts the saved_model format for graphs.
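For context, once the SavedModel itself saves and loads correctly, the usual tf2onnx path is its module CLI; a sketch with hypothetical paths, invoked from Python:

```python
import subprocess

# Hypothetical directory/file names; tf2onnx's module CLI converts a
# SavedModel directory into an ONNX graph.
subprocess.run(
    [
        "python", "-m", "tf2onnx.convert",
        "--saved-model", "qat_saved_model",
        "--output", "model.onnx",
        "--opset", "13",
    ],
    check=True,
)
```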
Hello @daverim, any suggestions on how to resolve this would be appreciated. Thanks!
Hi, sorry for the delay. I just tested your sample code and it seems to be resolved now. There are some warnings about un-traced functions. Using: tf=2.8.0-dev20210930, tensorflow_model_optimization=0.7.0. Please try it and see if it works for you.
Thanks @daverim. That works now.
@daverim I encountered the same error log for SeparableConv2D using TF 2.8.0 (no error with DepthwiseConv2D in that TF version).
Do you have any idea what caused the error in DepthwiseConv2D and whether the same fix would work for SeparableConv2D?
The best way to avoid this issue is to disable the layer tracing when creating the SavedModel (save_traces=False), but you'll have to manually define the serving signatures.
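A minimal sketch of that suggestion, assuming the standard Keras SavedModel API; the model, shapes, and paths are placeholders:

```python
import tensorflow as tf

# Placeholder model standing in for the QAT model from this thread.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(32, 32, 3)),
    tf.keras.layers.SeparableConv2D(8, kernel_size=3),
])

# Manually defined serving signature; save_traces=False skips the per-layer
# function tracing that produces the missing-function KeyError on reload.
@tf.function(input_signature=[tf.TensorSpec([None, 32, 32, 3], tf.float32)])
def serving_fn(x):
    return model(x)

model.save(
    "saved_model_dir",
    save_traces=False,
    signatures={"serving_default": serving_fn},
)
```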
Hi @k-w-w, thank you for your feedback! This specific issue (for DepthwiseConv) has been solved, as mentioned in a comment on Jan 26th above, but the same issue persists for SeparableConv here. I tried your suggestion, but it did not solve my issue, since the problem does not seem to be with the layer tracing.
@gcunhase Are you getting the same error even with save_traces=False?
@k-w-w Yes.
@gcunhase can you paste the error trace?
@k-w-w:
This bug also has the reproducible code, so we can move our discussion there if you agree.
This bug can be closed for DepthwiseConv2D.
Describe the bug
Please download the scripts to reproduce from: https://drive.google.com/drive/folders/15cajAZ9sAZ2Uyix8sDVSYku6QCqDCec7?usp=sharing
Command to run: `python sample_qat.py`
I have a simple model with an input layer and a DepthwiseConv2D layer. I quantize this model by adding quantize_and_dequantize nodes at the input of the DepthwiseConv2D layer (commented in the code). When I save the model and load it back, I see the error from the title: KeyError: '__inference_depthwise_conv2d_layer_call_fn_126'.
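For context, a condensed sketch of that setup; the actual script inserts quantize_and_dequantize nodes manually, so using tfmot's default QAT wrapper here is an approximation, and the shapes are illustrative:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Input layer followed by a single DepthwiseConv2D, as described above.
inputs = tf.keras.Input(shape=(32, 32, 3))
outputs = tf.keras.layers.DepthwiseConv2D(kernel_size=3)(inputs)
model = tf.keras.Model(inputs, outputs)

# Approximation of the quantization step: the original script adds
# quantize_and_dequantize nodes at the layer input; the default QAT
# wrapper quantizes layer inputs/weights instead.
qat_model = tfmot.quantization.keras.quantize_model(model)

qat_model.save("qat_saved_model")  # SavedModel format

# On affected versions, reloading raises
# KeyError: '__inference_depthwise_conv2d_layer_call_fn_126'
with tfmot.quantization.keras.quantize_scope():
    reloaded = tf.keras.models.load_model("qat_saved_model")
```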
System information
TensorFlow version (installed from source or binary): 2.5 (Tried with 2.6 as well)
TensorFlow Model Optimization version (installed from source or binary):
SavedModel loading fails specifically for depthwise convolution; it works fine for regular convolution.