
Commit bd31680

fixed the missing processor after conversion
1 parent 6af3619 commit bd31680

File tree: 3 files changed (+194, −99 lines)


recipes/experimental/long_context/H2O/README.md (+2, −2)
@@ -8,7 +8,7 @@ Besides, LLMs usually have poor generation to long sequence during inference. H2O
 
 Current implementation supports llama-1/2/3, from 7B to 70B. Since H2O only maintains the most important KV pairs, it might missing some important information in the middle content for some knowlege-intensive tasks.
 
-More details please refer to Paper: **https://arxiv.org/pdf/2306.14048**; Blog: **https://allenz.work/?p=11**.
+More details please refer to Paper: **https://arxiv.org/pdf/2306.14048**;
 
 **Note: this implementation is tested with transformers == 4.39.0**
 
@@ -21,7 +21,7 @@ python run_summarization.py \
 --input-path data/summarization/xsum.jsonl \
 --output-path summarization_output/xsum_h2o.jsonl \
 --model-name meta-llama/Meta-Llama-3-8B \
---enable_h2o_generation
+--enable_h2o_generation
 ```
 
 ##### **Results**
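
The README text touched in the first hunk summarizes how H2O works: the KV cache keeps only a small budget of "heavy hitter" tokens (those with the largest accumulated attention scores) plus a window of recent tokens, and evicts everything else. For readers of this diff, here is a minimal sketch of that eviction policy as described in the paper (arXiv 2306.14048); the function name, shapes, and parameters are illustrative assumptions, not the recipe's actual implementation.

```python
import torch

def h2o_evict(keys, values, attn_scores, num_heavy, num_recent):
    """Minimal sketch of H2O KV-cache eviction (arXiv 2306.14048).

    Keeps the `num_heavy` cached tokens with the largest accumulated
    attention scores ("heavy hitters") plus the `num_recent` most
    recent tokens; all other KV pairs are dropped. Names and shapes
    are illustrative, not this repo's actual implementation.

    keys, values: [seq_len, head_dim]  cached K/V for one head
    attn_scores:  [seq_len]            attention each cached token has
                                       received, summed over past steps
    """
    seq_len = keys.shape[0]
    budget = num_heavy + num_recent
    if seq_len <= budget:
        return keys, values, attn_scores  # nothing to evict yet

    # The recent window is always kept, regardless of score.
    recent_idx = torch.arange(seq_len - num_recent, seq_len)

    # Among the older tokens, keep the ones with the highest
    # accumulated attention -- the "heavy hitters".
    old_scores = attn_scores[: seq_len - num_recent]
    heavy_idx = torch.topk(old_scores, num_heavy).indices

    keep = torch.cat([heavy_idx.sort().values, recent_idx])
    return keys[keep], values[keep], attn_scores[keep]
```

In the recipe this behavior is switched on by the `--enable_h2o_generation` flag shown in the second hunk; a routine like the sketch above would run once per decoding step, after the newest attention row has been accumulated into `attn_scores`, so the cache never grows beyond `num_heavy + num_recent` entries.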
