Help to consume simple linear integration TensorFlow model needed #5487
Comments
Hi. So I've run the C# code you have, and I don't think there's any problem with it; the issue seems to be in the saved_model.pb you provided. After quickly reading your Jupyter notebook, it seems to me you're simply training a simple regression model that maps one input number to one output number, so I'm surprised by all the nodes found in your .pb model. More importantly, several of the nodes seem to hold information about training (like all the "Adam" nodes for learning_rate, decay, etc.).

So, ML.NET uses Tensorflow.NET to load and run TF models, and Tensorflow.NET requires the TF models to be frozen (link to docs), so I would suggest trying to freeze the model before saving it. Here's a tutorial on how to freeze Keras models. I'd suspect this is necessary to get a meaningful graph that only holds information for inference, not training. When you have a clearer graph, you'll be able to identify which node is your output variable and use that in ML.NET.

BTW, in ML.NET you don't need the TensorFlow model to have a "Features" or "Prediction/Softmax" node. I'd guess you're taking those names from ModelBuilder or one of our samples on TensorFlow, but you don't need them. Once you've done the steps above, I hope you're able to identify the desired output variable and its name, and then use that name in ML.NET. Please let us know if this fixes your problem.
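As a minimal sketch (the model path and file name below are placeholders for your own frozen model), this is one way to load a frozen .pb in ML.NET and print its schema, which is usually the easiest way to spot the real input and output node names:

```csharp
using System;
using Microsoft.ML;

// Sketch: load a frozen TensorFlow graph with ML.NET and list the nodes it exposes.
// Requires the Microsoft.ML.TensorFlow and SciSharp.TensorFlow.Redist NuGet packages.
class InspectTensorFlowModel
{
    static void Main()
    {
        var mlContext = new MLContext();

        // The path below is a placeholder; point it at your own frozen .pb file.
        var tensorFlowModel = mlContext.Model.LoadTensorFlowModel(@"Data\regression\frozen_model.pb");

        // Each column corresponds to a node in the graph; these names are what
        // ScoreTensorFlowModel expects as its input/output column names.
        foreach (var column in tensorFlowModel.GetModelSchema())
            Console.WriteLine($"{column.Name} : {column.Type}");
    }
}
```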
Hi @antoniovs1029, thank you for the reply. I will try to implement it and will write back to you this week. With kind regards, Evgeny
Hi @antoniovs1029, it seems that the tutorial you provided is outdated, because I cannot execute all the steps; I get this error: `AttributeErrorTraceback (most recent call last) AttributeError: module 'keras.backend' has no attribute 'get_session'`. The version they use is below TF 2.
OK, I have created two files, a .pb and a .pbtxt, by using this function:

Then I put these files into the project folder. When I try to load the TensorFlow model in VS 2019 I get the exception: "Could not find SavedModel .pb or .pbtxt at supplied export directory path: C:\Users\<User>\Documents\machinelearning\TensorFlow.Net.Demo\bin\Debug\netcoreapp3.1\Data\regression". The graph can be shown in www.netron.app and looks better than before. All the links you have provided cannot be applied, since I use TF v2. Please help me solve this problem. Thank you for your help in advance.
The model seems much clearer now, so I think that not freezing it before was the problem.
I don't think you need to load both the .pb and the .pbtxt; you simply need to load the .pb file the same way you were loading it before, but this time the schema should be much simpler. Are you loading it the same way as before? Can you provide a full stack trace? I don't know where that error message could be coming from.
Hi @antoniovs1029, I didn't change anything in the code I previously provided to you. The model path just points to the folder where both new frozen_model files are located. They are also set to the "Copy if newer" option, so they are definitely in the provided folder. Here is the stack trace of the TensorFlowException:

UPDATE: I can now load the model. In the previous version I could just provide the folder path; now it must be the path + model name. Then it works as expected. I will try to configure the pipeline and will report the results or problems. :D
Now I am stuck at the pipeline. Can you help me construct the pipeline, please? Thank you in advance.
To correctly run your TensorFlow model you need to do two main things: score the model with the correct input and output node names, and change the input and output classes so the model's columns are float vectors. The pipeline:

var pipeline = tensorFlowModel.ScoreTensorFlowModel("Identity", "NumberOfStops")
    .Append(mlContext.Transforms.CopyColumns("DeliveryDelay", "Identity"));

The DeliveryDelay class becomes:
public class DeliveryDelay
{
[LoadColumn(0)]
public string Month;
[LoadColumn(1)]
public float ProductId;
[LoadColumn(2, 2), VectorType(1)]
public float[] NumberOfStops;
[LoadColumn(3)]
public float Delay;
}

In the DeliveryDelayPrediction class:

[VectorType(1)]
public float[] DeliveryDelay;

Below I attach the full Program.cs I used, since I might have changed other things. Notice that I disabled the call to the Evaluate method.

Click to see Code (full Program.cs)
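As a rough end-to-end sketch (not necessarily what the attached Program.cs does), assuming the DeliveryDelay class above, a DeliveryDelayPrediction class with the [VectorType(1)] float[] DeliveryDelay field, and a hypothetical model path, fitting and using the pipeline could look like this:

```csharp
// Sketch of wiring the pieces together; the model path and input values are made up.
// Assumes: using System; using System.Collections.Generic; using Microsoft.ML;
var mlContext = new MLContext();

var tensorFlowModel = mlContext.Model.LoadTensorFlowModel(@"Data\regression\frozen_model.pb");

var pipeline = tensorFlowModel.ScoreTensorFlowModel("Identity", "NumberOfStops")
    .Append(mlContext.Transforms.CopyColumns("DeliveryDelay", "Identity"));

// The TensorFlow scorer doesn't train, so an empty data view is enough to fit the pipeline.
var emptyData = mlContext.Data.LoadFromEnumerable(new List<DeliveryDelay>());
var model = pipeline.Fit(emptyData);

// Single prediction: NumberOfStops is a length-1 vector, and so is the DeliveryDelay output.
var engine = mlContext.Model.CreatePredictionEngine<DeliveryDelay, DeliveryDelayPrediction>(model);
var input = new DeliveryDelay
{
    Month = "May",                          // made-up values; only NumberOfStops feeds the model
    ProductId = 1f,
    NumberOfStops = new float[] { 3f },
    Delay = 0f
};
var prediction = engine.Predict(input);
Console.WriteLine($"Predicted delay: {prediction.DeliveryDelay[0]}");
```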
Hi @antoniovs1029, thank you for the provided solution. The evaluation function is also important, since I want to demonstrate that the same accuracy is reached as in the Jupyter notebook when using the same model. If possible, could you provide the snippet for the evaluation function too? Thank you in advance.
Hi. Below is the code to use the Evaluate method with your TensorFlow model. By the way, it seems you're doing this for a demo; I'm just curious: where are you presenting this demo? Thanks.

So, to extract the value of the output array of your TensorFlow model, simply define the following to be used by the CustomMappingTransformer:

public class ExtractInput
{
[ColumnName("DeliveryDelay")]
[VectorType(1)]
public float[] array;
}
public class ExtractOutput
{
[ColumnName("output")]
public float extractedValue;
}
public static void ExtractMapping(ExtractInput input, ExtractOutput output)
{
// The input array is length 1, extract its only value
output.extractedValue = input.array[0];
}

and add the CustomMappingTransformer to your pipeline:

.Append(mlContext.Transforms.CustomMapping<ExtractInput, ExtractOutput>(ExtractMapping, contractName: null));

Then on the .Evaluate call provide the column names. Notice that the score column is named "output" because that's how I declared the ColumnName in ExtractOutput:

var metrics = mlContext.Regression.Evaluate(predictions, labelColumnName: "Delay", scoreColumnName: "output");

The code above could have been written in multiple ways; please refer to the CustomMapping samples and APIs in the link here to learn how to use it. Thanks.

Click to see full code (full Program.cs)
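To compare against the numbers from the Jupyter notebook, the RegressionMetrics object returned by the Evaluate call above can simply be printed; a minimal sketch:

```csharp
// Sketch: print the regression metrics produced by Evaluate above.
Console.WriteLine($"R^2:  {metrics.RSquared}");
Console.WriteLine($"MAE:  {metrics.MeanAbsoluteError}");
Console.WriteLine($"RMSE: {metrics.RootMeanSquaredError}");
```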
Hi @antoniovs1029, thank you a lot for your help. I am a little surprised that it is not trivial to implement such a simple use case. Can you give any tips on how and where I can find the most important material, such as documentation, to work out all of these configuration steps without help? I have found a few examples, but no real documentation.

P.S. Next Monday I am giving a demonstration of ML basics at my work. We have a very .NET-focused landscape with a microservice architecture, so my task is to evaluate different combinations of ML tools and frameworks. My idea is to combine the power of TensorFlow with ML.NET, so that this approach fits into the existing IT landscape. I am very pleased with how quickly I got help from you, but I have some concerns about the complexity of implementing this approach, since it does not seem to be a trivial task. Have a nice weekend.
Great to hear it's all working now, will close this issue then.
System information
Issue
Please help me by providing a solution so that I can solve this problem.
I created a simple Python script in a Jupyter Notebook for a data set consisting of 8 rows, successfully trained and evaluated the model, and then exported it. The model can be loaded, but I cannot configure the pipeline to make a prediction.
Now I am trying to consume the model with an ML.NET console .NET Core 3.1 application.
I cannot consume the model, because I am not able to create the proper pipeline.
I cannot access the properties in GetModelSchema as in the Microsoft tutorial, because they don't exist. (How can I define them in my model ("Features", "Prediction/Softmax") so that they are labeled and recognized?)
Source code / logs
project&jupyter_notebook.zip