OnnxTransform -- Update to OnnxRuntime 0.2.0 #2085
Merged
Commits (21)
All commits by jignparm:

ac4aaac  Initial checking to run unit tests on Linux
58ec7fd  Merge branch 'master' into jignparm/onnx_linx_support
8efafe6  Test linux execution on dev package of onnxruntime
683579b  Added fix to onnxruntime nuget prop file to handle Linux dlls correctly
bde81d3  test package copying all native dlls to output folder
753bcaa  Add rpath to dll
4fa7a9f  Update to Onnxruntime 2.0 package
9e2e5dc  use cpu package
2cbfaaa  uncomment device id in unit test
54df111  folder PR comments
b153ad7  merge with master, and fix comments
9e223cd  minor fix
eb7ca45  minor change to kick off build
526a088  minor change to kick off build
78fc1e8  remove reference to myget, and use nuget instead
d132374  removed nullgpuID. Use AtMostOnce for gpuDeviceID and fallBackToCpu v…
f4fe2e2  Folded more PR comments
c1b7c67  minor change to kick of build status
0ac8c0e  minor change to kick off random build failure
45ce659  minor change to kick off build
af38c76  minor change to kick off build
Conversations
Why aren't we using the GPU package anymore? #Resolved
The GPU package will also be available on NuGet.org. In particular, on Linux, even CPU execution is blocked if the CUDA libraries are not available. Using the CPU package will also enable Linux testing, should an Ubuntu CI leg become available in the future.
In reply to: 251941810
I don’t understand this. Will ML.NET use the GPU package or not? Why did we switch to the GPU package before, only to switch back off it now?
The primary reason is that we should depend on a package that runs on multiple platforms without additional requirements. The OnnxRuntime GPU package won't run on Linux without the CUDA libraries installed, even if a user only wants to run on CPU. To avoid confusion, we should default to the CPU package, which runs cross-platform (Ubuntu Linux at least, and macOS in the future) without requiring any extra user installation. To use GPU functionality, we can ask users to install CUDA and the GPU OnnxRuntime package. When we add macOS support, the runtime will be available only in the CPU package (there's no GPU runtime for Mac), so long term the CPU package seems like the better option.
Most other frameworks also ship two separate packages and require CUDA to be installed for the GPU package. This keeps the package size down and also simplifies licensing.
See the MXNet GPU installation here --> https://mxnet.incubator.apache.org/versions/master/install/windows_setup.html#install-with-gpus.
And Theano as well --> http://deeplearning.net/software/theano/install_windows.html.
The OnnxTransform's XML documentation currently says to use the Microsoft.ML.OnnxRuntime.Gpu package if users want to run on GPU; today they'd need to rebuild ML.NET from source to do this. We could add an optional OnnxTransform-Gpu as a separate transform that uses the GPU package. To test it properly, however, we'd need to add a GPU CI leg.
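As a rough sketch of what the two options above would look like from a consuming project's csproj (package names are taken from the discussion; the versions shown are placeholders, not confirmed by this PR):

```xml
<!-- Default: CPU-only ONNX Runtime. Runs cross-platform without CUDA installed. -->
<ItemGroup>
  <PackageReference Include="Microsoft.ML.OnnxRuntime" Version="0.2.0" />
</ItemGroup>

<!-- GPU alternative: requires CUDA libraries on the machine, and per the note
     above, currently also requires rebuilding ML.NET from source. -->
<ItemGroup>
  <PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu" Version="0.2.0" />
</ItemGroup>
```

Only one of the two references would be used at a time; the point of defaulting to the first is that it imposes no extra installation requirements on Linux or (eventually) macOS.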
In reply to: 252102461
OK, this works for now. I agree it is best to have our "default" work in as many places as possible.
We typically don't take this route in .NET libraries*. Instead we usually provide all options in binary form, i.e. add an optional OnnxTransform-Gpu as a separate package/transform.
(*) Reasons for this: