1 parent bce7ea8 commit b9fce42
tests/integration/test_lists/test-db/l0_b200.yml
@@ -33,6 +33,7 @@ l0_b200:
   - test_e2e.py::test_ptp_quickstart_advanced[Llama3.1-8B-NVFP4-nvfp4-quantized/Meta-Llama-3.1-8B]
   - test_e2e.py::test_ptp_quickstart_advanced[Llama3.1-8B-FP8-llama-3.1-model/Llama-3.1-8B-Instruct-FP8]
   - test_e2e.py::test_ptp_quickstart_advanced_mtp[DeepSeek-V3-Lite-BF16-DeepSeek-V3-Lite/bf16]
+  - test_e2e.py::test_ptp_quickstart_advanced_mixed_precision
   - test_e2e.py::test_ptp_quickstart_advanced_eagle3[Llama-3.1-8b-Instruct-llama-3.1-model/Llama-3.1-8B-Instruct-EAGLE3-LLaMA3.1-Instruct-8B]
   - test_e2e.py::test_trtllm_bench_pytorch_backend_sanity[meta-llama/Llama-3.1-8B-llama-3.1-8b-False-False]
   - unittest/_torch -k "not (modeling or multi_gpu or auto_deploy)"