Commit 484834b

ezyang authored and Sacha Refshauge committed
Declare NamedTuple at top level (pytorch#53273)
Summary: Pull Request resolved: pytorch#53273

This prevents a mypy bug. Fixes pytorch#53272

Signed-off-by: Edward Z. Yang <[email protected]>

Test Plan: Imported from OSS
Reviewed By: rohan-varma
Differential Revision: D26819428
Pulled By: ezyang
fbshipit-source-id: e71575ed13321665a976cc5ef8b2993c00626b7d
1 parent 8aaf3f1 commit 484834b

File tree

1 file changed: +6 −6 lines changed

torch/testing/_internal/distributed/distributed_test.py

Lines changed: 6 additions & 6 deletions
@@ -126,6 +126,12 @@ def get_profiling_event(postfix, profiler):
 ddp_outputs_not_used_in_loss_str = "`forward` function outputs participate in calculating loss"
 
 
+class DDPUnevenTestInput(NamedTuple):
+    name: str
+    model: nn.Module
+    inp: Union[torch.tensor, tuple]
+    sync_interval: int
+
 
 class _FC2(nn.Module):
     def __init__(self):
@@ -4090,12 +4096,6 @@ def _run_uneven_inputs_test(
     @require_backends_available({"gloo", "nccl"})
     @skip_if_lt_x_gpu(2)
     def test_ddp_uneven_inputs(self):
-        class DDPUnevenTestInput(NamedTuple):
-            name: str
-            model: nn.Module
-            inp: Union[torch.tensor, tuple]
-            sync_interval: int
-
         dim = 1000
         batch = 1
         # Create a variety of models to run uneven input tests on.
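
The pattern behind this commit can be illustrated with a minimal, self-contained sketch (the class and function names below are hypothetical, chosen only for illustration): moving a `NamedTuple` definition from a function body to module top level sidesteps type-checker trouble with locally defined classes, and the function then only needs to instantiate the tuple.

```python
from typing import NamedTuple


# Declared at module top level, as the commit does with DDPUnevenTestInput.
# Type checkers such as mypy analyze top-level class definitions more
# reliably than classes defined inside a function body, and top-level
# NamedTuples are also picklable.
class UnevenTestInputSketch(NamedTuple):  # hypothetical name
    name: str
    sync_interval: int


def make_input() -> UnevenTestInputSketch:  # hypothetical helper
    # The function body now only builds an instance; it no longer
    # defines the class itself.
    return UnevenTestInputSketch(name="small_model", sync_interval=1)


inp = make_input()
print(inp.name, inp.sync_interval)
```

This mirrors the diff above: the six added lines declare the class once at top level, and the six removed lines delete the duplicate definition that previously lived inside `test_ddp_uneven_inputs`.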

0 commit comments