TorchTune provides well-tested components with a high bar on correctness.
## Acknowledgements
The Llama2 code in this repository is inspired by the original [Llama2 code](https://github.com/meta-llama/llama/blob/main/llama/model.py). We'd also like to give a huge shoutout to some awesome libraries and tools in the ecosystem!
- Hugging Face for the [Datasets Repository](https://github.com/huggingface/datasets)
- [gpt-fast](https://github.com/pytorch-labs/gpt-fast) for performant LLM inference techniques which we've adopted OOTB
- [lit-gpt](https://github.com/Lightning-AI/litgpt), [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), [transformers](https://github.com/huggingface/transformers) and [llama recipes](https://github.com/meta-llama/llama-recipes) for reference implementations and for pushing forward the LLM finetuning community