
[DONT MERGE] PR to debug CI failures on Windows #6195


2 changes: 1 addition & 1 deletion .circleci/config.yml

(Generated file; diff not rendered by default.)

2 changes: 1 addition & 1 deletion .circleci/config.yml.in
@@ -146,7 +146,7 @@ commands:
        default: true
    steps:
      - pip_install:
-         args: --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
+         args: --pre torch==1.13.0.dev20220618 --extra-index-url https://download.pytorch.org/whl/nightly/cpu
          descr: Install PyTorch from nightly releases
      - pip_install:
          args: --no-build-isolation <<# parameters.editable >> --editable <</ parameters.editable >> .
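The point of the pin: the nightly job previously floated with whatever wheel was current, so a regression in a new nightly was indistinguishable from a torchvision bug. Pinning to the 20220618 dev build makes the failure reproducible. A quick post-install sanity check could look like this (a sketch, not part of this PR; the version string is the pin above, and the CPU nightly wheel adds a `+cpu` local suffix):

```python
import torch

# The pinned CPU nightly should report e.g. "1.13.0.dev20220618+cpu".
print(torch.__version__)
assert torch.__version__.startswith("1.13.0.dev20220618"), torch.__version__
```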
3 changes: 2 additions & 1 deletion .circleci/unittest/windows/scripts/install.sh
@@ -34,7 +34,8 @@ else
fi

printf "Installing PyTorch with %s\n" "${cudatoolkit}"
-conda install -y -c "pytorch-${UPLOAD_CHANNEL}" -c nvidia "pytorch-${UPLOAD_CHANNEL}"::pytorch[build="*${version}*"] "${cudatoolkit}"
+# conda install -y -c "pytorch-${UPLOAD_CHANNEL}" -c nvidia "pytorch-${UPLOAD_CHANNEL}"::pytorch[build="*${version}*"] "${cudatoolkit}"
+pip install --pre torch==1.13.0.dev20220618+cpu --extra-index-url https://download.pytorch.org/whl/nightly/cpu

torch_cuda=$(python -c "import torch; print(torch.cuda.is_available())")
echo torch.cuda.is_available is $torch_cuda
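Here the conda install is swapped for the same pinned pip wheel, so the Windows job exercises exactly the nightly being debugged; the `torch.cuda.is_available()` echo then reports what the installed wheel can see. If the job should fail fast instead of just printing, a guard along these lines would work (a sketch, not part of this PR; `CUDATOOLKIT` is a hypothetical environment variable standing in for the script's `${cudatoolkit}` shell variable):

```python
import os
import torch

# Hypothetical guard: abort early if a CUDA toolkit was requested but the
# freshly installed wheel cannot see a GPU (the +cpu pin above never will).
expected_cuda = os.environ.get("CUDATOOLKIT", "cpuonly") != "cpuonly"
if expected_cuda and not torch.cuda.is_available():
    raise SystemExit("CUDA requested but torch.cuda.is_available() is False")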
12 changes: 9 additions & 3 deletions test/test_models.py
@@ -603,8 +603,8 @@ def test_classification_model(model_fn, dev):
"input_shape": (1, 3, 224, 224),
}
model_name = model_fn.__name__
if dev == "cuda" and SKIP_BIG_MODEL and model_name in skipped_big_models:
pytest.skip("Skipped to reduce memory usage. Set env var SKIP_BIG_MODEL=0 to enable test for this model")
# if dev == "cuda" and SKIP_BIG_MODEL and model_name in skipped_big_models:
# pytest.skip("Skipped to reduce memory usage. Set env var SKIP_BIG_MODEL=0 to enable test for this model")
kwargs = {**defaults, **_model_params.get(model_name, {})}
num_classes = kwargs.get("num_classes")
input_shape = kwargs.pop("input_shape")
@@ -613,9 +613,15 @@ def test_classification_model(model_fn, dev):
    model.eval().to(device=dev)
    # RNG always on CPU, to ensure x in cuda tests is bitwise identical to x in cpu tests
    x = torch.rand(input_shape).to(device=dev)
-    out = model(x)
+    with torch.inference_mode():
+        out = model(x)
    _assert_expected(out.cpu(), model_name, prec=0.1)
    assert out.shape[-1] == num_classes
+
+    if SKIP_BIG_MODEL and model_name in skipped_big_models:
+        # Skip backprop test only
+        return
+
    _check_jit_scriptable(model, (x,), unwrapper=script_model_unwrapper.get(model_name, None), eager_out=out)
    _check_fx_compatible(model, x, eager_out=out)

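The two test changes work together: the forward pass now runs under `torch.inference_mode()`, which keeps no autograd state and lowers peak memory, and big models return early after the cheap output check, so only the memory-heavy JIT/FX checks are skipped rather than the whole test (the old commented-out code skipped everything). A minimal standalone sketch of the resulting pattern (`resnet18` and `torch.jit.script` stand in for the real fixtures and `_check_jit_scriptable`; the model names are illustrative, the real list lives in test_models.py):

```python
import torch
from torchvision.models import resnet18

SKIP_BIG_MODEL = True
skipped_big_models = {"vit_h_14"}  # illustrative stand-in

def check_classification_model(model_fn, dev="cpu", num_classes=1000):
    model = model_fn(num_classes=num_classes).eval().to(device=dev)
    x = torch.rand(1, 3, 224, 224).to(device=dev)

    # Forward only: inference_mode records no autograd graph at all,
    # so peak memory stays well below a backprop-capable forward pass.
    with torch.inference_mode():
        out = model(x)
    assert out.shape[-1] == num_classes

    # Big models still get the cheap output check above; only the
    # expensive scripting check below is skipped for them.
    if SKIP_BIG_MODEL and model_fn.__name__ in skipped_big_models:
        return
    torch.jit.script(model)  # stand-in for _check_jit_scriptable

check_classification_model(resnet18)
```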