Commit 27d04f5

tianleiwu authored and Ted Themistokleous committed
Adjust test tolerance (microsoft#19947)
### Description

Improve the precision of tests. Changes include:

1. Update checkers.cc to use consistent default tolerances.
2. Allow different default tolerances for different providers at runtime (previously, the threshold of a test was decided at compile time).
3. Explicitly set absolute and relative error tolerances for tests that fail to pass the new default thresholds.

#### Default Thresholds Change

Note that a test passes when `abs(expected - value) <= absolute + relative * abs(expected)`.

Default test thresholds when neither absolute nor relative tolerance is set:

type | provider | absolute (before) | absolute (after) | relative (before) | relative (after)
-- | -- | -- | -- | -- | --
double | CPU | 0.001 | 0.00001 | 0 | 0.00001
double | CUDA | 0.005 | 0.00001 | 0 | 0.00001
double | TRT | 0.005 | 0.00001 | 0 | 0.00001
double | ROCM | 0.005 | 0.00001 | 0 | 0.00001
double | DML | 0.005 | 0.00001 | 0 | 0.00001
float | CPU | 0.0001 | 0.00001 | 0 | 0.0001
float | CUDA | 0.005 | 0.00001 | 0 | 0.0001
float | TRT | 0.005 | 0.00001 | 0 | 0.0001
float | ROCM | 0.005 | 0.00001 | 0 | 0.0001
float | DML | 0.005 | 0.00001 | 0 | 0.0001
float | Training* | 0.005 | 0.001 | 0 | 0.0001
half | CPU | 0.001 | 0.0025 | 0 | 0.001
half | CUDA | 0.005 | 0.0025 | 0 | 0.001
half | TRT | 0.005 | 0.0025 | 0 | 0.001
half | ROCM | 0.005 | 0.0025 | 0 | 0.001
half | DML | 0.02 | 0.005 | 0 | 0.001
half | Training* | 0.005 | 0.005 | 0 | 0.001
bfloat16 | CPU | 0.0001 | 0.02 | 0 | 0.01
bfloat16 | CUDA | 0.0001 | 0.02 | 0.05 | 0.01
bfloat16 | TRT | 0.0001 | 0.02 | 0.05 | 0.01
bfloat16 | ROCM | 0.0001 | 0.02 | 0.05 | 0.01
bfloat16 | DML | 0.0001 | 0.02 | 0.05 | 0.01
bfloat16 | Training* | 0.0001 | 0.02 | 0.05 | 0.01

\*Training means the build flag ENABLE_TRAINING_CORE is defined; the provider can be any of the above.

#### Threshold for provider

Previously, the threshold could change according to build flags:

```
#if defined(USE_CUDA) || defined(USE_ROCM) || defined(USE_DML)
constexpr float threshold = 0.005f;
#else
constexpr float threshold = 0.0001f;
#endif
```

For a CPU-only build, the threshold was 0.0001. For a CUDA build, the threshold for the CPU provider (some tests in a CUDA build actually run with the CPU provider) changed to 0.005.

After this change, the threshold depends only on the data type and the provider used in the test; it no longer varies with build flags for non-training builds. Default thresholds for training might differ from inference (see the table above). A few factors contribute: training has gradient outputs; TF32 is not disabled in training; and some training tests run multiple iterations, so error can accumulate. How to set different thresholds based on these factors could be a future task.
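To make the formula concrete, here is a minimal self-contained sketch of the check; the actual implementation in onnxruntime/test/providers/checkers.cc additionally resolves the per-type, per-provider defaults from the table above when no explicit tolerance is set:

```cpp
#include <cmath>

// Sketch of the pass/fail check described above, similar to numpy.isclose:
// a result passes when |expected - actual| <= abs_tol + rel_tol * |expected|.
// Illustrative only; see checkers.cc for the real implementation.
bool IsResultClose(float expected, float actual, float abs_tol, float rel_tol) {
  return std::fabs(expected - actual) <= abs_tol + rel_tol * std::fabs(expected);
}

// Example: with the new float defaults (absolute 0.00001, relative 0.0001),
// an expected value of 100.0f tolerates roughly +/-0.01, dominated by the
// relative term.
```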
1 parent 7297887 · commit 27d04f5

26 files changed: +204 -64 lines

onnxruntime/test/contrib_ops/attention_op_test.cc

+9
```diff
@@ -227,6 +227,12 @@ static void RunAttentionTest(
     tester.AddOptionalInputEdge<int32_t>();
   }
 
+  if (use_float16) {
+    tester.SetOutputTolerance(0.005f);
+  } else {
+    tester.SetOutputTolerance(0.001f, 0.001f);
+  }
+
   if (enable_cuda) {
     std::vector<std::unique_ptr<IExecutionProvider>> execution_providers;
     execution_providers.push_back(DefaultCudaExecutionProvider());
@@ -254,6 +260,9 @@ static void RunAttentionTest(
   if (enable_dml) {
     std::vector<std::unique_ptr<IExecutionProvider>> execution_providers;
     execution_providers.push_back(DefaultDmlExecutionProvider());
+    if (use_float16) {
+      tester.SetOutputTolerance(0.02f);
+    }
     tester.Run(OpTester::ExpectResult::kExpectSuccess, "", {}, nullptr, &execution_providers);
   }
 }
```

onnxruntime/test/contrib_ops/decoder_attention_op_test.cc

+3 -4

```diff
@@ -31,10 +31,8 @@ static void RunAttentionTest(
     const std::vector<float>* new_value_cache = nullptr,
     const std::vector<float>* key_cache = nullptr,
     const std::vector<float>* value_cache = nullptr,
-    const std::initializer_list<bool>* key_padding_mask_data = nullptr,
-    bool use_float16 = false) {
-  int min_cuda_architecture = use_float16 ? 530 : 0;
-  bool enable_cuda = HasCudaEnvironment(min_cuda_architecture);
+    const std::initializer_list<bool>* key_padding_mask_data = nullptr) {
+  bool enable_cuda = HasCudaEnvironment(0);
   bool enable_rocm = (nullptr != DefaultRocmExecutionProvider().get());
   bool enable_cpu = false;
 
@@ -99,6 +97,7 @@ static void RunAttentionTest(
     tester.AddOutput<float>("new_key_cache", output_cache_dims, *new_key_cache);
     tester.AddOutput<float>("new_value_cache", output_cache_dims, *new_value_cache);
   }
+  tester.SetOutputTolerance(0.001f, 0.001f);
 
   std::vector<std::unique_ptr<IExecutionProvider>> execution_providers;
   if (enable_cuda) {
```

onnxruntime/test/contrib_ops/decoder_masked_multihead_attention_op_test.cc

+4 -2

```diff
@@ -754,9 +754,10 @@ TEST(DecoderMaskedSelfAttentionTest, Test_fp32) {
 
   // Output(s)
   tester.AddOutput<float>("output", input_dims, output);
-
   tester.AddOutput<float>("present", past_dims, present);
 
+  tester.SetOutputTolerance(0.001f, 0.001f);
+
   // Run - Regular kernel execution path
   {
     std::vector<std::unique_ptr<IExecutionProvider>> execution_providers;
@@ -897,9 +898,10 @@ TEST(DecoderMaskedSelfAttentionTest, Test_fp16) {
 
   // Output(s)
   tester.AddOutput<MLFloat16>("output", input_dims, output);
-
   tester.AddOutput<MLFloat16>("present", past_dims, present);
 
+  tester.SetOutputTolerance(0.005f);
+
   // Run - Regular kernel execution path
   {
     std::vector<std::unique_ptr<IExecutionProvider>> execution_providers;
```

onnxruntime/test/contrib_ops/fft_op_test.cc

+2
```diff
@@ -25,6 +25,7 @@ TEST(ContribOpTest, Rfft) {
   // Target values conputed using PyTorch torch.fft.rfft(X, dim=-1, norm="backward")
   test.AddInput<float>("X", {4, 4}, {0.8129f, 1.3108f, -0.8790f, -1.2046f, 0.1661f, -0.9831f, 0.5879f, 0.4918f, 1.2506f, 0.7244f, -2.6260f, -1.1268f, -1.6885f, 1.0439f, -0.2595f, 1.8780f});
   test.AddOutput<float>("Y", {4, 3, 2}, {0.0400f, 0.0000f, 1.6919f, -2.5154f, -0.1722f, 0.0000f, 0.2627f, 0.0000f, -0.4218f, 1.4748f, 1.2454f, 0.0000f, -1.7779f, 0.0000f, 3.8766f, -1.8512f, -0.9730f, 0.0000f, 0.9740f, 0.0000f, -1.4290f, 0.8341f, -4.8699f, 0.0000f});
+  test.SetOutputTolerance(0.0001f);
   test.Run(OpTester::ExpectResult::kExpectSuccess, "", {}, nullptr, &execution_providers);
 }
@@ -45,6 +46,7 @@ TEST(ContribOpTest, Irfft) {
   test.AddAttribute("normalized", static_cast<int64_t>(0));
   test.AddInput<float>("X", {4, 3, 2}, {0.0400f, 0.0000f, 1.6919f, -2.5154f, -0.1722f, 0.0000f, 0.2627f, 0.0000f, -0.4218f, 1.4748f, 1.2454f, 0.0000f, -1.7779f, 0.0000f, 3.8766f, -1.8512f, -0.9730f, 0.0000f, 0.9740f, 0.0000f, -1.4290f, 0.8341f, -4.8699f, 0.0000f});
   test.AddOutput<float>("Y", {4, 4}, {0.8129f, 1.3108f, -0.8790f, -1.2046f, 0.1661f, -0.9831f, 0.5879f, 0.4918f, 1.2506f, 0.7244f, -2.6260f, -1.1268f, -1.6885f, 1.0439f, -0.2595f, 1.8780f});
+  test.SetOutputTolerance(0.0001f);
   test.Run(OpTester::ExpectResult::kExpectSuccess, "", {}, nullptr, &execution_providers);
 }
 }  // namespace test
```

onnxruntime/test/contrib_ops/gemm_fastgelu_op_test.cc

+4 -2

```diff
@@ -50,6 +50,8 @@ static void RunGemmFastGeluGpuTest(const std::vector<float>& input_data, const s
     tester.AddOutput<float>("Y", output_dims, output_data);
   }
 
+  tester.SetOutputTolerance(use_float16 ? 0.005f : 0.0025f);
+
   tester.Config(run_with_tunable_op)
       .RunWithConfig();
 }
@@ -154,7 +156,7 @@ TEST(GemmFastGeluTest, GemmFastGeluWithoutBiasFloat16) {
 
   RunGemmFastGeluGpuTest(input_data, weight_data, bias_data, output_data,
                          input_dims, weight_dims, bias_dims, output_dims,
-                         false);
+                         false, true);
 }
 
 TEST(GemmFastGeluTest, GemmFastGeluWithBiasFloat16) {
@@ -189,7 +191,7 @@ TEST(GemmFastGeluTest, GemmFastGeluWithBiasFloat16) {
 
   RunGemmFastGeluGpuTest(input_data, weight_data, bias_data, output_data,
                          input_dims, weight_dims, bias_dims, output_dims,
-                         true);
+                         true, true);
 }
 
 TEST(GemmFastGeluTest, GemmFastGeluWithBias_bfloat16) {
```

onnxruntime/test/contrib_ops/gridsample_test.cc

+1
```diff
@@ -126,6 +126,7 @@ TEST(GridsampleContribOpTest, gridsample_mode_bicubic) {
                                 0.5000f, 0.5000f, 1.0000f, 1.0000f});
   test.AddAttribute("mode", "bicubic");
   test.AddOutput<float>("Y", {1, 1, 2, 4}, {-0.1406f, 0.3828f, 1.7556f, 2.9688f, 2.9688f, 1.7556f, 5.1445f, 1.3906f});
+  test.SetOutputTolerance(0.0001f);
   test.Run(OpTester::ExpectResult::kExpectSuccess, "", {kCudaNHWCExecutionProvider});
 }
```

onnxruntime/test/contrib_ops/layer_norm_op_test.cc

+6
```diff
@@ -160,6 +160,7 @@ TEST(LayerNormTest, LayerNorm_Scale_Bias) {
   test.AddInput<float>("gamma", {2}, {-0.6953f, 5.1824f});
   test.AddInput<float>("bias", {2}, {0.6435f, -0.3964f});
   test.AddOutput<float>("output", dims, {-0.0516f, -5.5776f, -0.0518f, -5.5788f, -0.0518f, -5.5788f});
+  test.SetOutputTolerance(0.0001f);
   test.Run();
 }
@@ -172,6 +173,8 @@ TEST(LayerNormTest, LayerNorm_Scale_Bias_Float16Input) {
   test.AddInput<float>("gamma", {2}, {-0.6953f, 5.1824f});
   test.AddInput<float>("bias", {2}, {0.6435f, -0.3964f});
   test.AddOutput<float>("output", dims, {-0.0516f, -5.5776f, -0.0518f, -5.5788f, -0.0518f, -5.5788f});
+  test.SetOutputTolerance(0.0001f);
+
   // TRT, DNNL, OpenVINO and NNAPI, CoreML don't support this combination of datatypes
   test.Run(OpTester::ExpectResult::kExpectSuccess, "",
            {kTensorrtExecutionProvider, kDnnlExecutionProvider, kQnnExecutionProvider,
@@ -228,6 +231,9 @@ TEST(LayerNormTest, LayerNorm17_double) {
   test.AddInput<double>("x", dims, {1.0, 2.0, 3.0, 4.0, 5.0, 6.0});
   test.AddInput<double>("gamma", {3}, {1.0, 1.0, 1.0});
   test.AddOutput<double>("output", dims, {-1.2247, 0.0, 1.2247, -1.2247, 0.0, 1.2247});
+
+  test.SetOutputTolerance(0.0001f);
+
   // DNNL does not support double
   test.Run(OpTester::ExpectResult::kExpectSuccess, "", {kDnnlExecutionProvider});
 }
```

onnxruntime/test/contrib_ops/matmul_integer_to_float_test.cc

+1 -1

```diff
@@ -127,7 +127,7 @@ void TestMatMulIntegerToFloat(bool is_matrix_b_constant,
 
   if (std::is_same_v<OType, float>) {
     test.AddOutput<float>("Y", {M, N}, Y_data);
-    test.SetOutputAbsErr("Y", 0.0001f);
+    test.SetOutputAbsErr("Y", 0.001f);
     test.SetOutputRelErr("Y", 0.02f);
   } else {
     test.AddOutput<MLFloat16>("Y", {M, N}, ToFloat16(Y_data));
```

onnxruntime/test/contrib_ops/moe_test.cc

+2
```diff
@@ -47,6 +47,7 @@ static void RunMoETest(
     tester.AddInput<MLFloat16>("fc1_experts_bias", fc1_experts_bias_dims, ToFloat16(fc1_experts_bias));
     tester.AddInput<MLFloat16>("fc2_experts_bias", fc2_experts_bias_dims, ToFloat16(fc2_experts_bias));
     tester.AddOutput<MLFloat16>("output", output_dims, ToFloat16(output_data));
+    tester.SetOutputTolerance(0.005f);
   } else {
     tester.AddInput<float>("input", input_dims, input);
     tester.AddInput<float>("router_probs", router_probs_dims, router_probs);
@@ -55,6 +56,7 @@ static void RunMoETest(
     tester.AddInput<float>("fc1_experts_bias", fc1_experts_bias_dims, fc1_experts_bias);
     tester.AddInput<float>("fc2_experts_bias", fc2_experts_bias_dims, fc2_experts_bias);
     tester.AddOutput<float>("output", output_dims, output_data);
+    tester.SetOutputTolerance(0.001f);
   }
 
   std::vector<std::unique_ptr<IExecutionProvider>> execution_providers;
```

onnxruntime/test/contrib_ops/packed_multihead_attention_op_test.cc

+2
```diff
@@ -107,6 +107,7 @@ static void RunPackedMultiHeadAttentionTest(
     }
 
     tester.AddOutput<MLFloat16>("output", output_dims, ToFloat16(output_data));
+    tester.SetOutputTolerance(0.005f);
   } else {
     if (is_packed_qkv) {
       tester.AddInput<float>("query", packed_qkv_dims, query_data);
@@ -131,6 +132,7 @@ static void RunPackedMultiHeadAttentionTest(
     }
 
     tester.AddOutput<float>("output", output_dims, output_data);
+    tester.SetOutputTolerance(0.001f, 0.001f);
   }
 
   std::vector<std::unique_ptr<IExecutionProvider>> execution_providers;
```

onnxruntime/test/contrib_ops/quantize_attention_op_test.cc

+2
```diff
@@ -90,11 +90,13 @@ void RunQAttention(const std::vector<float>& input_data,
     tester.AddInput<MLFloat16>("input_scale", {1}, ToFloat16({input_quant_params.scale}));
     tester.AddInput<MLFloat16>("weight_scale", {1}, ToFloat16({weight_quant_params.scale}));
     tester.AddOutput<MLFloat16>("output", output_dims, ToFloat16(output_data));
+    tester.SetOutputTolerance(0.01f);
   } else {
     tester.AddInput<float>("bias", bias_dims, bias_data);
     tester.AddInput<float>("input_scale", {1}, {input_quant_params.scale});
     tester.AddInput<float>("weight_scale", {1}, {weight_quant_params.scale});
     tester.AddOutput<float>("output", output_dims, output_data);
+    tester.SetOutputTolerance(0.005f);
   }
 
   if (mask_index_data.size() > 0) {
```

onnxruntime/test/providers/base_tester.cc

+14
```diff
@@ -120,6 +120,20 @@ void BaseTester::SetOutputRelErr(const char* name, float v) {
   it->validation_params.relative_error = optional<float>(v);
 }
 
+void BaseTester::SetOutputTolerance(float abs_error, float rel_error) {
+  for (auto& output : output_data_) {
+    if (output.def.Exists()) {
+      if (abs_error >= 0.0f) {
+        output.validation_params.absolute_error = optional<float>(abs_error);
+      }
+
+      if (rel_error >= 0.0f) {
+        output.validation_params.relative_error = optional<float>(rel_error);
+      }
+    }
+  }
+}
+
 std::vector<int64_t> BaseTester::GetDimsForProto(gsl::span<const int64_t> dims) {
   std::vector<int64_t> dims_for_proto{dims.begin(), dims.end()};
   if (add_symbolic_dim_to_tensor_data_ >= 0 &&
```
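Because negative values are skipped, a caller can override one bound while leaving the other at its default. A hedged usage sketch (a fragment assuming an OpTester named `tester` inside a test body; not code from this commit):

```cpp
// Override only the relative tolerance for all outputs added so far.
// The negative abs_error is skipped, so the absolute tolerance stays at
// the type/provider default resolved in checkers.cc.
tester.SetOutputTolerance(/*abs_error=*/-1.0f, /*rel_error=*/0.001f);
```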

onnxruntime/test/providers/base_tester.h

+11
```diff
@@ -519,9 +519,20 @@ class BaseTester {
     custom_session_registries_.push_back(registry);
   }
 
+  // For floating-point types (double/float/half/bfloat16), tolerance is similar to numpy.isclose:
+  //   absolute(expected_value - actual_value) <= abs_error + rel_error * absolute(expected_value)
+  // For integer types, tolerance parameters are ignored except in the following cases:
+  //   for uint8, tolerance is only applied to the NNAPI/XNNPACK/DML providers;
+  //   for int8, only abs_error is used and rel_error is ignored. See checkers.cc for details.
+  // If abs_error or rel_error is not set, a default value is used (search DefaultTolerance for details).
   void SetOutputAbsErr(const char* name, float v);
   void SetOutputRelErr(const char* name, float v);
 
+  // Set absolute and relative tolerances for all existing outputs.
+  // Negative values are ignored.
+  // Note that this does not set tolerances for outputs added after this call.
+  void SetOutputTolerance(float abs_error, float rel_error = -1.0f);
+
   // Number of times to call InferenceSession::Run. The same feeds are used each time.
   // e.g. used to verify the generator ops behave as expected
   void SetNumRunCalls(int n) {
```
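One caveat from the comment above is worth a concrete example: the call only affects outputs that already exist, so call order matters. A hypothetical sketch (output names and data are made up, assuming an OpTester named `tester`):

```cpp
// Hypothetical usage: SetOutputTolerance applies to existing outputs only.
tester.AddOutput<float>("out_a", dims, expected_a);
tester.SetOutputTolerance(0.001f, 0.001f);           // affects "out_a" only
tester.AddOutput<float>("out_b", dims, expected_b);  // "out_b" keeps defaults
tester.SetOutputAbsErr("out_b", 0.002f);             // per-output override
```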
