OnnxTransform -- Update to OnnxRuntime 0.2.0 #2085
Conversation
Codecov Report
@@ Coverage Diff @@
## master #2085 +/- ##
==========================================
+ Coverage 71.15% 71.16% +<.01%
==========================================
Files 779 780 +1
Lines 140311 140338 +27
Branches 16047 16043 -4
==========================================
+ Hits 99838 99868 +30
+ Misses 36021 36019 -2
+ Partials 4452 4451 -1
@@ -7,7 +7,7 @@
   <ItemGroup>
     <ProjectReference Include="../Microsoft.ML/Microsoft.ML.nupkgproj" />
-    <PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu" Version="$(MicrosoftMLOnnxRuntimeGpuPackageVersion)"/>
+    <PackageReference Include="Microsoft.ML.OnnxRuntime" Version="$(MicrosoftMLOnnxRuntimePackageVersion)"/>
Why aren't we using the GPU package anymore? #Resolved
The GPU package will still be available on NuGet.org. The problem is that on Linux, even CPU execution is blocked when the CUDA libraries are not available.
Using the CPU package will also enable Linux testing, if an Ubuntu CI leg becomes available in the future.
In reply to: 251941810
I'm not following this. Will ML.NET use the GPU package or not? Why did we switch to the GPU package before, only to switch back off of it now?
The primary reason is that we should depend on a package which will run on multiple platforms without additional requirements. The OnnxRuntime GPU package won't run on Linux without the CUDA libraries installed, even if a user only wants to run on CPU. To avoid confusion, we should default to the CPU package, which runs cross-platform (Ubuntu Linux at least, and macOS in the future) without requiring any extra user installation. To use the GPU functionality, we can ask users to install CUDA and the GPU OnnxRuntime package. When we add macOS support, the runtime will be available only in the CPU package (there's no GPU runtime for Mac), so long term the CPU package seems like the better option.
Most other frameworks also ship two separate packages and require CUDA to be installed for the GPU package. This keeps package sizes down and also simplifies licensing.
See the MXNet GPU installation here --> https://mxnet.incubator.apache.org/versions/master/install/windows_setup.html#install-with-gpus.
And Theano as well --> http://deeplearning.net/software/theano/install_windows.html.
The OnnxTransform's XML documentation currently says to use the Microsoft.ML.OnnxRuntime.Gpu package if users want to run on GPU. Currently they'll need to rebuild ML.NET from source to do this. We can add an optional OnnxTransform-Gpu as a separate transform which uses the GPU package; to test it properly, however, we'd need to add a GPU CI leg.
In reply to: 252102461
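To make the "CPU by default" behavior concrete, here is a minimal sketch (not code from this PR) of scoring an ONNX model with ML.NET using the CPU package. It assumes the Microsoft.ML and Microsoft.ML.OnnxTransformer NuGet packages, a local model.onnx whose input tensor is hypothetically named "data", and the ApplyOnnxModel extension with its gpuDeviceId parameter as it appears in later ML.NET releases; the exact API at the time of this PR may differ.

```csharp
// Minimal sketch, not code from this PR: score an ONNX model on CPU by default.
// Assumes the Microsoft.ML and Microsoft.ML.OnnxTransformer NuGet packages and a
// local "model.onnx" whose input tensor is named "data" (hypothetical).
using Microsoft.ML;
using Microsoft.ML.Data;

class OnnxCpuScoringSketch
{
    // Hypothetical input schema; the column name must match the model's input.
    class ModelInput
    {
        [ColumnName("data"), VectorType(1, 3, 224, 224)]
        public float[] Data { get; set; }
    }

    static void Main()
    {
        var mlContext = new MLContext();

        // gpuDeviceId = null keeps execution on CPU, which works cross-platform
        // with the Microsoft.ML.OnnxRuntime (CPU) package. Passing a device id
        // (e.g. 0) requires the Microsoft.ML.OnnxRuntime.Gpu package plus CUDA.
        var pipeline = mlContext.Transforms.ApplyOnnxModel(
            modelFile: "model.onnx",
            gpuDeviceId: null,
            fallbackToCpu: true);

        // OnnxTransformer needs no training; an empty data view supplies the schema.
        var empty = mlContext.Data.LoadFromEnumerable(new ModelInput[0]);
        var transformer = pipeline.Fit(empty);
    }
}
```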
OK, this works for now. I agree it is best to have our "default" work in as many places as possible.
> The OnnxTransform's XML documentation currently says to use the Microsoft.ML.OnnxRuntime.Gpu package if users want to run on GPU. Currently they'll need to rebuild ML.NET from source to do this.

We typically don't take this route in .NET libraries*. Instead we provide all options in binary form, i.e. add an optional OnnxTransform-Gpu as a separate package/transform.
(*) Reasons for this:
- It means the files are no longer signed by Microsoft.
- It messes with dependencies, because now if someone wanted to ship a package that depended on the OnnxTransform.Gpu package, they would need to redistribute it themselves. And if 2 people wanted to do it, they would have separate packages.
My main concern right now is the introduction of the sentinel value NullGpuID, when we already have a value that will work just fine: null. I'd like to see that addressed before merging.
My concern about the tests can be fixed in a subsequent PR.
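As a hedged illustration of this point (a sketch, not the transform's actual code), a nullable device id already expresses "no GPU requested" without a sentinel constant:

```csharp
// Sketch only, using a hypothetical options type; not code from this PR.
public sealed class OnnxScoringOptionsSketch
{
    // Avoid: a sentinel value that callers must know about.
    //   public const int NullGpuID = -1;
    //   public int GpuDeviceId = NullGpuID;

    // Prefer: null already means "run on CPU / no GPU requested".
    public int? GpuDeviceId { get; set; }

    public bool UseGpu => GpuDeviceId.HasValue;
}
```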
Just a couple things to clean up.
Add Linux support for OnnxTransform.
Fixes #2056
Fixes #2106
Fixes #2149