Commit 738460f

typo: "specutate" typo (#934)
1 parent 58e852a commit 738460f

File tree

1 file changed (+2, -2 lines)


flashinfer/sampling.py

+2 -2
@@ -1151,7 +1151,7 @@ def chain_speculative_sampling(
         Shape: ``(batch_size, num_speculate_tokens, vocab_size)``
     draft_token_ids: torch.Tensor
         The draft model's generated token indices.
-        Shape: ``(batch_size, num_specutate_tokens)``
+        Shape: ``(batch_size, num_speculate_tokens)``
     target_probs: torch.Tensor
         Expected to be uniformly distributed in ``[0, 1)``.
     target_probs: torch.Tensor
@@ -1183,7 +1183,7 @@ def chain_speculative_sampling(
         Compared to input :attr:`draft_token_ids`, the output tensor has an additional
         token index at the end for the final token, if all previous tokens are accepted,
         another "bonus" token will be sampled from the target model's probability.
-        Shape: (batch_size, num_specutate_tokens + 1)
+        Shape: (batch_size, num_speculate_tokens + 1)
     output_accepted_token_num: torch.Tensor
         The number of tokens that can be accepted if each token is considered independently for each request.
         This metric does not consider the fact that rejection sampling will stop at the first token that does not
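Both corrected docstring fragments describe `chain_speculative_sampling`. For context, here is a minimal NumPy sketch of the chained rejection-sampling scheme those shapes refer to. This is not flashinfer's CUDA implementation; the function name, the `-1` padding for positions after the first rejection, and the returned accepted-token counter are assumptions of this sketch.

```python
import numpy as np

def chain_speculative_sampling_sketch(draft_probs, draft_token_ids, target_probs, rng=None):
    """Sketch of chained speculative (rejection) sampling.

    Shapes, matching the docstring above:
      draft_probs:     (batch_size, num_speculate_tokens, vocab_size)
      draft_token_ids: (batch_size, num_speculate_tokens)
      target_probs:    (batch_size, num_speculate_tokens + 1, vocab_size)

    Returns (output_token_ids, accepted_token_num); output_token_ids has
    shape (batch_size, num_speculate_tokens + 1), with positions after the
    first rejection padded with -1 (a convention of this sketch only).
    """
    rng = rng if rng is not None else np.random.default_rng()
    batch, num_spec = draft_token_ids.shape
    vocab = draft_probs.shape[-1]
    out = np.full((batch, num_spec + 1), -1, dtype=np.int64)
    accepted = np.zeros(batch, dtype=np.int64)

    for b in range(batch):
        emitted = 0
        for i in range(num_spec):
            tok = draft_token_ids[b, i]
            p_t = target_probs[b, i, tok]
            p_d = draft_probs[b, i, tok]
            # Accept the draft token with probability min(1, p_target / p_draft).
            if rng.random() < min(1.0, p_t / max(p_d, 1e-20)):
                out[b, emitted] = tok
                emitted += 1
                accepted[b] += 1
            else:
                # On rejection, resample from the residual distribution
                # max(p_target - p_draft, 0), renormalized, then stop.
                residual = np.maximum(target_probs[b, i] - draft_probs[b, i], 0.0)
                residual /= residual.sum()
                out[b, emitted] = rng.choice(vocab, p=residual)
                break
        else:
            # All draft tokens accepted: emit the extra "bonus" token from
            # the target model's final-position distribution, which is why
            # the output shape is (batch_size, num_speculate_tokens + 1).
            out[b, emitted] = rng.choice(vocab, p=target_probs[b, num_spec])
    return out, accepted
```

With one-hot draft and target distributions the sketch is deterministic: every draft token is accepted and the bonus token comes from the target's last row, so the output has `num_speculate_tokens + 1` entries, as the corrected shape states.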
