OnnxTransform -- Update to OnnxRuntime 0.2.0 #2085

Merged: 21 commits, Jan 31, 2019
2 changes: 1 addition & 1 deletion build/Dependencies.props
@@ -15,7 +15,7 @@
<PropertyGroup>
<GoogleProtobufPackageVersion>3.5.1</GoogleProtobufPackageVersion>
<LightGBMPackageVersion>2.2.1.1</LightGBMPackageVersion>
<MicrosoftMLOnnxRuntimeGpuPackageVersion>0.1.5</MicrosoftMLOnnxRuntimeGpuPackageVersion>
<MicrosoftMLOnnxRuntimePackageVersion>0.2.0</MicrosoftMLOnnxRuntimePackageVersion>
<MlNetMklDepsPackageVersion>0.0.0.7</MlNetMklDepsPackageVersion>
<ParquetDotNetPackageVersion>2.1.3</ParquetDotNetPackageVersion>
<SystemDrawingCommonPackageVersion>4.5.0</SystemDrawingCommonPackageVersion>
@@ -7,7 +7,7 @@

<ItemGroup>
<ProjectReference Include="../Microsoft.ML/Microsoft.ML.nupkgproj" />
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu" Version="$(MicrosoftMLOnnxRuntimeGpuPackageVersion)"/>
<PackageReference Include="Microsoft.ML.OnnxRuntime" Version="$(MicrosoftMLOnnxRuntimePackageVersion)"/>
@eerhardt (Member), Jan 29, 2019:
Why aren't we using the GPU package anymore? #Resolved

@jignparm (Contributor, Author):
The GPU package will still be available on NuGet.org. The problem is Linux in particular: with the GPU package, even CPU execution is blocked if the CUDA libraries are not available. Using the CPU package will also enable Linux testing, if an Ubuntu CI leg becomes available in the future.


In reply to: 251941810

Member:
I’m not understanding this. Will ML.NET use the GPU package or not? Why did we switch to the GPU package before, only to switch back off the GPU package now?

@jignparm (Contributor, Author), Jan 30, 2019:

The primary reason is that we should depend on a package that runs on multiple platforms without additional requirements. The OnnxRuntime GPU package won't run on Linux without the CUDA libraries installed, even if a user only wants to run on CPU. To avoid confusion, we should default to the CPU package, which runs cross-platform (Ubuntu Linux at least, and macOS in the future) without requiring any extra installation by the user. For GPU functionality, we can ask users to install CUDA and the GPU OnnxRuntime package. When we add macOS support, the runtime will be available only in the CPU package (there's no GPU runtime for Mac), so long term the CPU package seems like the better option.

Most other frameworks also ship two separate packages, and for the GPU package require CUDA to be installed. This keeps package sizes down and also simplifies licensing.

See the MXNet GPU installation here --> https://mxnet.incubator.apache.org/versions/master/install/windows_setup.html#install-with-gpus.
And Theano as well --> http://deeplearning.net/software/theano/install_windows.html.

The OnnxTransform's XML documentation currently says to use the Microsoft.ML.OnnxRuntime.Gpu package if users want to run on GPU; currently they'd need to rebuild ML.NET from source to do this. We could add an optional OnnxTransform-Gpu as a separate transform that uses the Gpu package. To test it properly, however, we'd need to add a GPU CI leg.


In reply to: 252102461
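
For context, the CPU/GPU choice surfaces to users only at construction time. A minimal sketch, assuming an OnnxScoringEstimator overload that takes the model path, input/output column names, and the optional gpuDeviceId/fallbackToCpu parameters (the exact signature may differ from what this PR ships):

    using Microsoft.ML;
    using Microsoft.ML.Transforms;

    var env = new MLContext(conc: 1);
    var modelFile = "squeezenet/00000001/model.onnx";

    // CPU by default: no gpuDeviceId, no CUDA/cuDNN needed, cross-platform.
    var cpuPipe = new OnnxScoringEstimator(env, modelFile,
        new[] { "data_0" }, new[] { "softmaxout_1" });

    // GPU opt-in: requires the Microsoft.ML.OnnxRuntime.Gpu package plus CUDA
    // and cuDNN installed; fallbackToCpu resumes on CPU if GPU setup fails.
    var gpuPipe = new OnnxScoringEstimator(env, modelFile,
        new[] { "data_0" }, new[] { "softmaxout_1" },
        gpuDeviceId: 0, fallbackToCpu: true);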

Member:

OK, this works for now. I agree it is best to have our "default" work in as many places as possible.

> The documentation currently says to use the Microsoft.ML.OnnxRuntime.Gpu package if users want to run on GPU. Currently they'll need to rebuild ML.NET from source to do this.

We typically don't take this route in .NET libraries*. Instead we provide all options in binary form, i.e. add an optional OnnxTransform-Gpu package/transform.

(*) Reasons for this:

  1. It means the files are no longer signed by Microsoft.
  2. It messes with dependencies: if someone wanted to ship a package that depended on the OnnxTransform.Gpu package, they would need to redistribute it themselves, and if two people wanted to do that, they would end up with separate packages.

</ItemGroup>

</Project>
@@ -9,7 +9,7 @@
<ItemGroup>
<ProjectReference Include="..\Microsoft.ML.Core\Microsoft.ML.Core.csproj" />
<ProjectReference Include="..\Microsoft.ML.Data\Microsoft.ML.Data.csproj" />
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu" Version="$(MicrosoftMLOnnxRuntimeGpuPackageVersion)" />
<PackageReference Include="Microsoft.ML.OnnxRuntime" Version="$(MicrosoftMLOnnxRuntimePackageVersion)" />
</ItemGroup>

</Project>
19 changes: 11 additions & 8 deletions src/Microsoft.ML.OnnxTransform/OnnxTransform.cs
@@ -44,16 +44,19 @@ namespace Microsoft.ML.Transforms
/// </summary>
/// <remarks>
/// <p>Supports inferencing of models in ONNX 1.2 and 1.3 format (opset 7, 8 and 9), using the
/// <a href='https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/'>Microsoft.ML.OnnxRuntime.Gpu</a> library.
/// <a href='https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime/'>Microsoft.ML.OnnxRuntime</a> library.
/// </p>
/// <p>Models are scored on CPU by default. If GPU execution is needed (optional), install
/// <a href='https://developer.nvidia.com/cuda-downloads'>CUDA 10.0 Toolkit</a>
/// <p>Models are scored on CPU by default. If GPU execution is needed (optional), use the
/// NuGet package available at
/// <a href='https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/'>Microsoft.ML.OnnxRuntime.Gpu</a>
/// and download
/// <a href='https://developer.nvidia.com/cuda-downloads'>CUDA 9.1 Toolkit</a>
/// and
/// <a href='https://developer.nvidia.com/cudnn'>cuDNN</a>
/// , and set the parameter 'gpuDeviceId' to a valid non-negative integer. Typical device ID values are 0 or 1.
/// <a href='https://developer.nvidia.com/cudnn'>cuDNN</a>.
/// Set parameter 'gpuDeviceId' to a valid non-negative integer. Typical device ID values are 0 or 1.
/// </p>
/// <p>The inputs and outputs of the ONNX models must be Tensor type. Sequence and Maps are not yet supported.</p>
/// <p>OnnxRuntime currently works on Windows 64-bit platforms only. Linux and OSX to be supported soon.</p>
/// <p>OnnxRuntime currently works on Windows and Ubuntu 16.04 Linux 64-bit platforms. Mac OS to be supported soon.</p>
/// <p>Visit https://github.com/onnx/models to see a list of readily available models to get started with.</p>
/// <p>Refer to http://onnx.ai for more information about ONNX.</p>
/// </remarks>
@@ -70,10 +70,10 @@ public sealed class Arguments : TransformInputBase
[Argument(ArgumentType.Multiple | ArgumentType.Required, HelpText = "Name of the output column.", SortOrder = 2)]
public string[] OutputColumns;

[Argument(ArgumentType.AtMostOnce | ArgumentType.Required, HelpText = "GPU device id to run on (e.g. 0,1,..). Null for CPU. Requires CUDA 10.0.", SortOrder = 3)]
[Argument(ArgumentType.AtMostOnce, HelpText = "GPU device id to run on (e.g. 0,1,..). Null for CPU. Requires CUDA 9.1.", SortOrder = 3)]
public int? GpuDeviceId = null;

[Argument(ArgumentType.AtMostOnce | ArgumentType.Required, HelpText = "If true, resumes execution on CPU upon GPU error. If false, will raise the GPU execption.", SortOrder = 4)]
[Argument(ArgumentType.AtMostOnce, HelpText = "If true, resumes execution on CPU upon GPU error. If false, will raise the GPU execption.", SortOrder = 4)]
public bool FallbackToCpu = false;
}

5 changes: 3 additions & 2 deletions src/Microsoft.ML.OnnxTransform/OnnxUtils.cs
@@ -82,11 +82,12 @@ public OnnxModel(string modelFile, int? gpuDeviceId = null, bool fallbackToCpu =
{
_modelFile = modelFile;

if (gpuDeviceId.HasValue)
if (gpuDeviceId != null)
{
try
{
_session = new InferenceSession(modelFile, SessionOptions.MakeSessionOptionWithCudaProvider(gpuDeviceId.Value));
_session = new InferenceSession(modelFile,
SessionOptions.MakeSessionOptionWithCudaProvider(gpuDeviceId.Value));
}
catch (OnnxRuntimeException)
{
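The catch body is truncated in this view. As a reading aid, a minimal sketch of the fallback pattern this constructor implements, assuming fallbackToCpu simply re-creates the session without the CUDA provider and otherwise rethrows:

    try
    {
        // Ask OnnxRuntime for a CUDA-backed session on the requested device.
        _session = new InferenceSession(modelFile,
            SessionOptions.MakeSessionOptionWithCudaProvider(gpuDeviceId.Value));
    }
    catch (OnnxRuntimeException)
    {
        if (!fallbackToCpu)
            throw;
        // GPU initialization failed; fall back to the default CPU session.
        _session = new InferenceSession(modelFile);
    }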
12 changes: 7 additions & 5 deletions test/Microsoft.ML.OnnxTransformTest/DnnImageFeaturizerTest.cs
@@ -49,7 +49,7 @@ private float[] GetSampleArrayData()
{
var samplevector = new float[inputSize];
for (int i = 0; i < inputSize; i++)
samplevector[i] = (i / ((float) inputSize));
samplevector[i] = (i / ((float)inputSize));
return samplevector;
}

@@ -61,9 +61,11 @@ public DnnImageFeaturizerTests(ITestOutputHelper helper) : base(helper)
[ConditionalFact(typeof(Environment), nameof(Environment.Is64BitProcess))]
void TestDnnImageFeaturizer()
{
// Onnxruntime supports Ubuntu 16.04, but not CentOS
// Do not execute on CentOS image
if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
return;


var samplevector = GetSampleArrayData();

@@ -112,7 +114,7 @@ public void OnnxStatic()
imagePath: ctx.LoadText(0),
name: ctx.LoadText(1)))
.Read(dataFile);

var pipe = data.MakeNewEstimator()
.Append(row => (
row.name,
@@ -144,7 +146,7 @@ public void TestOldSavingAndLoading()
{
if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
return;


var samplevector = GetSampleArrayData();

@@ -158,7 +160,7 @@

var inputNames = "data_0";
var outputNames = "output_1";
var est = new DnnImageFeaturizerEstimator(Env, outputNames, m => m.ModelSelector.ResNet18(m.Environment, m.OutputColumn ,m.InputColumn), inputNames);
var est = new DnnImageFeaturizerEstimator(Env, outputNames, m => m.ModelSelector.ResNet18(m.Environment, m.OutputColumn, m.InputColumn), inputNames);
var transformer = est.Fit(dataView);
var result = transformer.Transform(dataView);
var resultRoles = new RoleMappedData(result);
62 changes: 29 additions & 33 deletions test/Microsoft.ML.OnnxTransformTest/OnnxTransformTests.cs
@@ -21,9 +21,26 @@

namespace Microsoft.ML.Tests
{
public class OnnxTransformTests : TestDataPipeBase

/// <summary>
/// A Fact attribute for Onnx unit tests. Onnxruntime is supported
/// only on Windows and Linux (Ubuntu 16.04), 64-bit platforms.
/// </summary>
public class OnnxFact : FactAttribute
{
public OnnxFact()
{
if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux) ||
RuntimeInformation.IsOSPlatform(OSPlatform.OSX) ||
!Environment.Is64BitProcess)
{
Skip = "Require 64 bit and Windows or Linux (Ubuntu 16.04).";
}
}
}

public class OnnxTransformTests : TestDataPipeBase
{
private const int inputSize = 150528;

private class TestData
@@ -83,16 +100,11 @@ public OnnxTransformTests(ITestOutputHelper output) : base(output)
{
}

[ConditionalFact(typeof(Environment), nameof(Environment.Is64BitProcess))] // x86 fails with "An attempt was made to load a program with an incorrect format."
[OnnxFact]
void TestSimpleCase()
{
if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
return;

var modelFile = "squeezenet/00000001/model.onnx";

var samplevector = GetSampleArrayData();

var dataView = ML.Data.ReadFromEnumerable(
new TestData[] {
new TestData()
@@ -126,7 +138,8 @@ void TestSimpleCase()
catch (InvalidOperationException) { }
}

[ConditionalTheory(typeof(Environment), nameof(Environment.Is64BitProcess))] // x86 fails with "An attempt was made to load a program with an incorrect format."
// x86 not supported
[ConditionalTheory(typeof(Environment), nameof(Environment.Is64BitProcess))]
[InlineData(null, false)]
[InlineData(null, true)]
void TestOldSavingAndLoading(int? gpuDeviceId, bool fallbackToCpu)
Expand All @@ -135,7 +148,6 @@ void TestOldSavingAndLoading(int? gpuDeviceId, bool fallbackToCpu)
return;

var modelFile = "squeezenet/00000001/model.onnx";

var samplevector = GetSampleArrayData();

var dataView = ML.Data.ReadFromEnumerable(
@@ -187,13 +199,10 @@ void TestOldSavingAndLoading(int? gpuDeviceId, bool fallbackToCpu)
}
}

[ConditionalFact(typeof(Environment), nameof(Environment.Is64BitProcess))] // x86 fails with "An attempt was made to load a program with an incorrect format."
[OnnxFact]
public void OnnxStatic()
{
if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
return;

var modelFile = "squeezenet/00000001/model.onnx";
var modelFile = Path.Combine(Directory.GetCurrentDirectory(), "squeezenet", "00000001", "model.onnx");

var env = new MLContext(conc: 1);
var imageHeight = 224;
@@ -233,23 +242,17 @@ public void OnnxStatic()
}
}

[ConditionalFact(typeof(Environment), nameof(Environment.Is64BitProcess))] // x86 output differs from Baseline
[OnnxFact]
void TestCommandLine()
{
if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
return;

var env = new MLContext();
var x = Maml.Main(new[] { @"showschema loader=Text{col=data_0:R4:0-150527} xf=Onnx{InputColumns={data_0} OutputColumns={softmaxout_1} model={squeezenet/00000001/model.onnx} GpuDeviceId=0 FallbackToCpu=+}" });
var x = Maml.Main(new[] { @"showschema loader=Text{col=data_0:R4:0-150527} xf=Onnx{InputColumns={data_0} OutputColumns={softmaxout_1} model={squeezenet/00000001/model.onnx}}" });
Assert.Equal(0, x);
}

[ConditionalFact(typeof(Environment), nameof(Environment.Is64BitProcess))] // x86 output differs from Baseline
[OnnxFact]
public void OnnxModelScenario()
{
if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
return;

var modelFile = "squeezenet/00000001/model.onnx";
using (var env = new ConsoleEnvironment(seed: 1, conc: 1))
{
@@ -280,13 +283,10 @@ public void OnnxModelScenario()
}
}

[ConditionalFact(typeof(Environment), nameof(Environment.Is64BitProcess))] // x86 output differs from Baseline
[OnnxFact]
public void OnnxModelMultiInput()
{
if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
return;

var modelFile = @"twoinput\twoinput.onnx";
var modelFile = Path.Combine(Directory.GetCurrentDirectory(), "twoinput", "twoinput.onnx");
using (var env = new ConsoleEnvironment(seed: 1, conc: 1))
{
var samplevector = GetSampleArrayData();
@@ -323,12 +323,9 @@ public void OnnxModelMultiInput()
}
}

[ConditionalFact(typeof(Environment), nameof(Environment.Is64BitProcess))] // x86 output differs from Baseline
[OnnxFact]
public void TestUnknownDimensions()
{
if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
return;

// model contains -1 in input and output shape dimensions
// model: input dims = [-1, 3], output argmax dims = [-1]
var modelFile = @"unknowndimensions/test_unknowndimensions_float.onnx";
Expand All @@ -350,4 +347,3 @@ public void TestUnknownDimensions()
}
}
}