Commit a43e056

Fix MoE vs FF (#41)
1 parent: 535030a

File tree

1 file changed: +1, -1 lines changed

src/transformers/models/llama4/configuration_llama4.py

Lines changed: 1 addition & 1 deletion
@@ -236,7 +236,7 @@ def __init__(

         self.interleave_moe_layer_step = interleave_moe_layer_step
         self.moe_layers = (
-            moe_layers if moe_layers is not None else list(range(0, num_hidden_layers, interleave_moe_layer_step))
+            moe_layers if moe_layers is not None else list(range(interleave_moe_layer_step-1, num_hidden_layers, interleave_moe_layer_step))
         )
         self.attention_chunk_size = attention_chunk_size
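
This one-line change affects which decoder layers get a mixture-of-experts (MoE) block rather than a dense feed-forward (FF) block when moe_layers is not passed explicitly: the default index sequence now starts at interleave_moe_layer_step - 1 instead of 0, so the last layer of each interleaving group is the MoE layer rather than the first. A minimal sketch of the before/after defaults; the values of num_hidden_layers and interleave_moe_layer_step below are illustrative, not taken from the commit:

# Illustrative configuration values, not from the commit itself.
num_hidden_layers = 8
interleave_moe_layer_step = 2

# Old default: MoE every interleave_moe_layer_step layers, starting at layer 0.
old_moe_layers = list(range(0, num_hidden_layers, interleave_moe_layer_step))
print(old_moe_layers)  # [0, 2, 4, 6] -> pattern MoE, FF, MoE, FF, ...

# New default: start at interleave_moe_layer_step - 1, so dense FF layers come first.
new_moe_layers = list(range(interleave_moe_layer_step - 1, num_hidden_layers, interleave_moe_layer_step))
print(new_moe_layers)  # [1, 3, 5, 7] -> pattern FF, MoE, FF, MoE, ...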
