Commit fbb3135

typo: fix a bunch of typos. (#862)
1 parent 6ec3bae · commit fbb3135

File tree

4 files changed: +11 -11 lines changed


Diff for: flashinfer/activation.py

+3 -3

@@ -104,7 +104,7 @@ def silu_and_mul(input: torch.Tensor, out: torch.Tensor = None) -> torch.Tensor:
         Input tensor, shape (..., 2 * hidden_size).

     out: Optional[torch.Tensor]
-        The the output tensor, if specified, the kernel will update this tensor inplace.
+        The output tensor, if specified, the kernel will update this tensor inplace.

     Returns
     -------
@@ -139,7 +139,7 @@ def gelu_tanh_and_mul(input: torch.Tensor, out: torch.Tensor = None) -> torch.Te
         Input tensor, shape (..., 2 * hidden_size).

     out: Optional[torch.Tensor]
-        The the output tensor, if specified, the kernel will update this tensor inplace.
+        The output tensor, if specified, the kernel will update this tensor inplace.

     Returns
     -------
@@ -171,7 +171,7 @@ def gelu_and_mul(input: torch.Tensor, out: torch.Tensor = None) -> torch.Tensor:
         Input tensor, shape (..., 2 * hidden_size).

     out: Optional[torch.Tensor]
-        The the output tensor, if specified, the kernel will update this tensor inplace.
+        The output tensor, if specified, the kernel will update this tensor inplace.

     Returns
     -------
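For context, all three activation kernels touched here share the gate-and-multiply pattern their docstrings describe: the last dimension is split in half, an activation is applied to the first half, and the result is multiplied by the second half. Below is a minimal pure-PyTorch sketch of the silu_and_mul semantics, including the documented in-place out contract; it illustrates the docstring's contract, not FlashInfer's CUDA kernel implementation.

import torch

def silu_and_mul_reference(input: torch.Tensor, out: torch.Tensor = None) -> torch.Tensor:
    # Split (..., 2 * hidden_size) into two (..., hidden_size) halves.
    gate, up = input.chunk(2, dim=-1)
    result = torch.nn.functional.silu(gate) * up
    if out is not None:
        # Documented contract: when out is given, update it in place.
        out.copy_(result)
        return out
    return result

x = torch.randn(4, 2 * 128)
assert silu_and_mul_reference(x).shape == (4, 128)

gelu_tanh_and_mul and gelu_and_mul follow the same sketch with the activation function swapped.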

Diff for: flashinfer/norm.py

+2 -2

@@ -61,7 +61,7 @@ def rmsnorm(
     eps: float
         Epsilon for numerical stability.
     out: Optional[torch.Tensor]
-        The the output tensor, if specified, the kernel will update this tensor inplace.
+        The output tensor, if specified, the kernel will update this tensor inplace.

     Returns
     -------
@@ -144,7 +144,7 @@ def gemma_rmsnorm(
     eps: float
         Epsilon for numerical stability.
     out: Optional[torch.Tensor]
-        The the output tensor, if specified, the kernel will update this tensor inplace.
+        The output tensor, if specified, the kernel will update this tensor inplace.

     Returns
     -------
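In the same spirit, here is a minimal pure-PyTorch reference for the rmsnorm docstring being fixed, assuming the standard RMSNorm formula x / sqrt(mean(x^2) + eps) * weight; the fused CUDA kernel's exact numerics may differ.

import torch

def rmsnorm_reference(
    input: torch.Tensor,
    weight: torch.Tensor,
    eps: float = 1e-6,
    out: torch.Tensor = None,
) -> torch.Tensor:
    # Normalize by the root-mean-square over the last dimension.
    rms = torch.sqrt(input.pow(2).mean(dim=-1, keepdim=True) + eps)
    result = input / rms * weight
    if out is not None:
        out.copy_(result)  # in-place update, per the docstring's contract
        return out
    return result

x = torch.randn(4, 128)
assert rmsnorm_reference(x, torch.ones(128)).shape == (4, 128)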

Diff for: flashinfer/page.py

+4 -4

@@ -180,9 +180,9 @@ def get_batch_indices_positions(
     Returns
     -------
     batch_indices: torch.Tensor
-        The batch indices of the each entry in the ragged tensor, shape: ``[nnz]``.
+        The batch indices of each entry in the ragged tensor, shape: ``[nnz]``.
     positions: torch.Tensor
-        The positions of the each entry in the ragged tensor, shape: ``[nnz]``.
+        The positions of each entry in the ragged tensor, shape: ``[nnz]``.

     Example
     -------
@@ -201,7 +201,7 @@ def get_batch_indices_positions(
     ----
     This function is similar to `CSR2COO <https://docs.nvidia.com/cuda/cusparse/#csr2coo>`_
     conversion in cuSPARSE library, with the difference that we are converting from a ragged
-    tensor (which don't require a column indices array) to a COO format.
+    tensor (which doesn't require a column indices array) to a COO format.

     See Also
     --------
@@ -349,7 +349,7 @@ def append_paged_kv_cache(

     Note
     ----
-    The function assumes that the space for appended k/v have already been allocated,
+    The function assumes that the space for appended k/v has already been allocated,
     which means :attr:`kv_indices`, :attr:`kv_indptr`, :attr:`kv_last_page_len` has
     incorporated appended k/v.

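The CSR2COO analogy in the get_batch_indices_positions note is worth unpacking: expanding a ragged tensor's indptr into per-entry batch indices and positions is exactly the CSR-row-pointer to COO-row-indices conversion, minus the column indices a ragged tensor does not need. A simplified CPU sketch of that expansion follows; FlashInfer's actual function takes additional arguments such as sequence lengths and may offset positions by previously cached tokens, whereas here positions simply count from the start of each segment.

import torch

def batch_indices_positions_sketch(indptr: torch.Tensor):
    # indptr: CSR-style row pointer of the ragged tensor, shape [batch_size + 1].
    lengths = indptr[1:] - indptr[:-1]  # entries per request
    # Row index of every entry: the COO expansion of the row pointer.
    batch_indices = torch.repeat_interleave(torch.arange(len(lengths)), lengths)
    # Offset of every entry within its own segment.
    positions = torch.cat([torch.arange(int(n)) for n in lengths])
    return batch_indices, positions

indptr = torch.tensor([0, 3, 5, 9])  # three requests of lengths 3, 2, 4
b, p = batch_indices_positions_sketch(indptr)
# b = [0, 0, 0, 1, 1, 2, 2, 2, 2]; p = [0, 1, 2, 0, 1, 0, 1, 2, 3]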

Diff for: flashinfer/prefill.py

+2 -2

@@ -991,7 +991,7 @@ class BatchPrefillWithPagedKVCacheWrapper:
     Note
     ----
     To accelerate computation, FlashInfer's batch prefill/append attention operators
-    creates some auxiliary data structures, these data structures can be reused across
+    create some auxiliary data structures, these data structures can be reused across
     multiple prefill/append attention calls (e.g. different Transformer layers). This
     wrapper class manages the lifecycle of these data structures.
     """
@@ -1815,7 +1815,7 @@ class BatchPrefillWithRaggedKVCacheWrapper:
     Note
     ----
     To accelerate computation, FlashInfer's batch prefill/append attention operators
-    creates some auxiliary data structures, these data structures can be reused across
+    create some auxiliary data structures, these data structures can be reused across
     multiple prefill/append attention calls (e.g. different Transformer layers). This
     wrapper class manages the lifecycle of these data structures.
     """
