Commit a9877a0

Remove one line from doc

Signed-off-by: Hui Gao <[email protected]>

1 parent: d720726

File tree

1 file changed (+0, -1)

docs/source/torch/attention.md (-1 line)
```diff
@@ -65,7 +65,6 @@ It contains the following predefined fields:
 | request_ids | List[int] | The request ID of each sequence in the batch. |
 | prompt_lens | List[int] | The prompt length of each sequence in the batch. |
 | kv_cache_params | KVCacheParams | The parameters for the KV cache. |
-| is_dummy_attention | bool | Indicates whether this is a simulation-only attention operation used for KV cache memory estimation. Defaults to False. |

 During `AttentionMetadata.__init__`, you can initialize additional fields for the new attention metadata.
 For example, the Flashinfer metadata initializes `decode_wrapper` here.
```
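The prose kept by this diff says backend authors can initialize additional fields during `AttentionMetadata.__init__`, as the Flashinfer metadata does with `decode_wrapper`. The sketch below illustrates that extension pattern under stated assumptions: the dataclass layout, the `FlashinferLikeAttentionMetadata` name, and the placeholder wrapper object are hypothetical, not the library's actual definitions; only the field names from the table above come from the source.

```python
# Minimal sketch of the extension pattern described in the doc above.
# Only the field names (request_ids, prompt_lens, kv_cache_params) come
# from the source table; every class shape here is an illustrative guess.
from dataclasses import dataclass, field
from typing import Any, List, Optional


@dataclass
class AttentionMetadata:
    # Predefined fields from the table in the doc; kv_cache_params is
    # typed loosely because KVCacheParams is not shown in this diff.
    request_ids: List[int] = field(default_factory=list)
    prompt_lens: List[int] = field(default_factory=list)
    kv_cache_params: Optional[Any] = None


@dataclass
class FlashinferLikeAttentionMetadata(AttentionMetadata):
    # Backend-specific field set up during initialization, analogous to
    # how the Flashinfer metadata initializes `decode_wrapper`.
    decode_wrapper: Optional[Any] = None

    def __post_init__(self) -> None:
        # A real backend would construct its decode wrapper here; a plain
        # placeholder object keeps this sketch self-contained and runnable.
        if self.decode_wrapper is None:
            self.decode_wrapper = object()
```

Usage would then look like `md = FlashinferLikeAttentionMetadata(request_ids=[0, 1], prompt_lens=[16, 8])`, after which `md.decode_wrapper` is already populated by `__post_init__`.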
