QAT model saving bug: Unable to save function b'__inference_separable_conv2d_layer_call_fn_961' #964
Comments
Any updates?
Hi @gcunhase, if you add the quantize_and_dequantize nodes yourself, you will need to create a tf.function that contains both the original function and the added graph, and save that. For example, if you have a model like my_model(tensor) and you want my_model(quantize_and_dequantize(x)), you will need to encapsulate this in a new model (function) to save, since the original model has not been modified. The easiest way to save is:
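(A minimal sketch reconstructed from the snippets later in this thread, since the original inline code was not preserved here; `NewModel`, the input shape, and the trace call mirror the code posted below.)
```python
import tensorflow as tf

# Wrap the model (with its added QDQ nodes) in a new tf.keras.Model so
# that everything is captured inside one saveable function.
class NewModel(tf.keras.Model):
    def __init__(self, my_model):
        super(NewModel, self).__init__()
        self.my_model = my_model

    def call(self, x):
        return self.my_model(x)

q_model = NewModel(my_model)  # my_model: your model with QDQ nodes added
q_model(tf.random.normal(shape=(1, 3, 224, 224)))  # trace once before saving
q_model.save('weights/sample_qat')
```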
and then you can save this as a SavedModel; otherwise the node will not be traceable.
@daverim thank you for your reply. I tried your WAR and I'm still getting the same error:
```
ValueError: Unable to save function b'__inference_sep_conv2d_1_layer_call_fn_1381' because it captures graph tensor Tensor("new_model/toy_separableconv2d_test/quant_sep_conv2d_1/LastValueQuant_1/QuantizeAndDequantizeV4:0", shape=(3, 3, 1, 1), dtype=float32) from a parent function which cannot be converted to a constant with `tf.get_static_value`.
```
New code snippet:
```python
q_model = quantize_model(model)
model_save_path = 'weights/sample_qat'

class NewModel(tf.keras.Model):
    def __init__(self, my_model):
        super(NewModel, self).__init__()
        self.my_model = my_model

    def call(self, x):
        return self.my_model(x)

q_model = NewModel(q_model)
q_model(tf.random.normal(shape=(1, 3, 224, 224)))
q_model.save(model_save_path)
```
Please note that the code snippet I sent when opening this bug works for models with Conv2D, Dense, and many other layers, but fails when I include SeparableConv2D or Conv2DTranspose in the model. It also failed with DepthwiseConv2D in tensorflow < 2.8. Can you think of any reason why? Thank you!
You need to move `quantize_model` into the `call` or the `__init__`:
```python
self.my_model = quantize_model(model)
```
Got the same error.
New code snippet:
```python
nn_model = _model()

class NewModel(tf.keras.Model):
    def __init__(self, my_model):
        super(NewModel, self).__init__()
        self.my_model = quantize_model(my_model)

    def call(self, x):
        return self.my_model(x)

q_model = NewModel(nn_model)
q_model(tf.random.normal(shape=(1, 3, 224, 224)))
q_model.save("saved_model_qat")
```
The bug's description contains the full repro. Also, any comment on this? "The code snippet I sent when opening this bug works for models with Conv2D, Dense, and many other layers, but fails when I include SeparableConv2D or Conv2DTranspose in the model. It also failed with DepthwiseConv2D in tensorflow < 2.8."
Any update on this?

@daverim I wonder how this very similar issue got solved: #868 (comment)

Any update?

Any update?

Any update? @daverim

Any update?

Any update?

Any update?
Also, I'm not sure what `quantize_model` is doing, but basically there are some tensors outside the function def that you want to save; the code I presented encapsulates all the data inside a function.
If you are using the QAT API, there is a transform that converts a separable conv into a conv2d and a depthwise conv2d with a quantize_and_dequantize between the two. If you are using this functionality outside of `quantize_apply`, it might not work correctly, so I highly recommend just using the QAT API as directed in the examples.
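For reference, a minimal sketch of the recommended path through the tensorflow-model-optimization package (the toy model below is illustrative, not the reporter's, and it assumes SeparableConv2D is covered by the built-in transforms as described above):
```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Illustrative toy model; the original repro also uses SeparableConv2D.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    tf.keras.layers.SeparableConv2D(8, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# quantize_model annotates the layers and runs quantize_apply, so the
# built-in graph transforms (e.g. splitting a separable conv into a
# conv2d and a depthwise conv2d with QDQ between them) are applied.
q_model = tfmot.quantization.keras.quantize_model(model)

q_model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# ...fine-tune q_model here...
q_model.save('saved_model_qat')
```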
Describe the bug
I have a simple model with an input layer and a SeparableConv2D layer (note that this issue also happens with Conv2DTranspose). I'm quantizing this model by adding quantize_and_dequantize_v2 nodes at the input and weights of the SeparableConv2D layer (commented in the code). However, I am unable to save the model as a SavedModel, failing with the ValueError in the title, whereas converting the Keras model directly to ONNX works.
System information
tensorflow-gpu==2.8.0
Describe the expected behavior
The Keras model can be saved and loaded as a SavedModel.
Describe the current behavior
The Keras model cannot be saved as a SavedModel (loading could not be tested, since saving fails).
Code to reproduce the issue
Please download the scripts to reproduce from: https://drive.google.com/file/d/1__EimBaQAXIgNPmYKl99uiBLR8FJe2EN/view?usp=sharing
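Since the repro scripts live behind the Drive link, here is a minimal hedged sketch of the pattern the report describes: quantize_and_dequantize_v2 nodes on the input and weights of a SeparableConv2D layer. The layer subclass, quantization ranges, and shapes below are illustrative assumptions; the linked scripts remain the authoritative repro.
```python
import tensorflow as tf

# Hedged stand-in for the linked repro: a SeparableConv2D layer with
# quantize_and_dequantize_v2 applied to its input and its kernels.
class QDQSeparableConv2D(tf.keras.layers.SeparableConv2D):
    def call(self, inputs):
        qdq = lambda t: tf.quantization.quantize_and_dequantize_v2(
            t, input_min=-1.0, input_max=1.0)
        x = qdq(inputs)                   # QDQ on the input activation
        dw = qdq(self.depthwise_kernel)   # QDQ on the depthwise weights
        pw = qdq(self.pointwise_kernel)   # QDQ on the pointwise weights
        x = tf.nn.separable_conv2d(x, dw, pw,
                                   strides=[1, 1, 1, 1], padding='VALID')
        if self.use_bias:
            x = tf.nn.bias_add(x, self.bias)
        return x

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    QDQSeparableConv2D(filters=8, kernel_size=3),
])
model(tf.random.normal((1, 224, 224, 3)))
model.save('weights/sample_qat')  # per the report, this save step fails
```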
Additional context
This also happened with DepthwiseConv2D in tf<2.8.0, but has been fixed in 2.8.0; please see the previous issue: QAT model saving bug: KeyError: '__inference_depthwise_conv2d_layer_call_fn_126' #868. The same failure still occurs with Conv2DTranspose.