
Discussion: batching benchmark and improvement #164

Closed
vansangpfiev opened this issue Jul 29, 2024 · 2 comments · Fixed by #168

@vansangpfiev (Contributor)

Motivation

  • AFAIK, we already support batching but do not have a benchmark for it yet. We should review the implementation to see if we can make any improvements.

Discussion

Resources

@nguyenhoangthuan99 (Contributor) commented Jul 30, 2024

Llama.cpp implementation:

Cortex implementation:

  • Each request (1 prompt per request) sent to the server is prepared and added to a task queue. A background process gathers prompts from the task queue, builds a batch, processes the batch, and pushes the results to an output queue.
    => The current cortex llama.cpp implementation can support batching, but some parameters need to be adjusted to sync with the latest llama.cpp implementation, and the batching usage should be documented in Readme.md. A benchmark script that runs batched requests is also needed to verify the implementation; a sketch of the batching loop follows below.
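
For illustration, a minimal Python sketch of the queue-based batching loop described above. The names `submit_request`, `batch_worker`, and `MAX_BATCH_SIZE` are hypothetical, not cortex's actual API:

```python
import queue
import threading

# Hypothetical batch-size cap, analogous in spirit to llama.cpp's
# n_parallel / n_batch parameters.
MAX_BATCH_SIZE = 8

task_queue: queue.Queue = queue.Queue()

def submit_request(prompt: str) -> queue.Queue:
    """Prepare an incoming request (1 prompt per request) and add it to the task queue."""
    output: queue.Queue = queue.Queue()
    task_queue.put({"prompt": prompt, "output": output})
    return output

def batch_worker(process_batch) -> None:
    """Background loop: gather queued prompts, build a batch, process it,
    then push each result to the originating request's output queue."""
    while True:
        # Block until at least one task arrives, then drain up to the cap.
        tasks = [task_queue.get()]
        while len(tasks) < MAX_BATCH_SIZE:
            try:
                tasks.append(task_queue.get_nowait())
            except queue.Empty:
                break
        results = process_batch([t["prompt"] for t in tasks])
        for task, result in zip(tasks, results):
            task["output"].put(result)

# Usage: start the worker with any function mapping a list of prompts to results.
# threading.Thread(target=batch_worker, args=(my_batch_fn,), daemon=True).start()
```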

@nguyenhoangthuan99 (Contributor) commented Jul 31, 2024

#168

Result when running the script on a 3090 (Linux):
{'message': 'Model already loaded'}

Finished in 27.825968503952026 s
Total token: 6108
Throughput when run parallel: 219.50718441776795 tokens/s
############################
Finished in 38.07835125923157 s
Total token: 4966
Throughput when run in sequence: 130.4153104264477 tokens/s
###########################
--- 70.19260907173157 seconds ---
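
For reference, a minimal sketch of a benchmark along these lines. The endpoint URL, payload fields, and token accounting are assumptions about an OpenAI-compatible server, not the exact script from #168:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

# Assumed OpenAI-compatible chat endpoint; adjust to the real server address.
URL = "http://localhost:3928/v1/chat/completions"
PROMPTS = ["Write a short story about a robot."] * 8  # hypothetical workload

def completion_tokens(prompt: str) -> int:
    """Send one request and return the number of generated tokens."""
    resp = requests.post(URL, json={
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    })
    return resp.json()["usage"]["completion_tokens"]

def bench(label: str, run) -> None:
    start = time.time()
    total = sum(run())
    elapsed = time.time() - start
    print(f"Finished in {elapsed} s")
    print(f"Total token: {total}")
    print(f"Throughput when run {label}: {total / elapsed} tokens/s")

# Parallel: requests overlap, so the server can batch them into shared passes.
with ThreadPoolExecutor(max_workers=len(PROMPTS)) as pool:
    bench("parallel", lambda: pool.map(completion_tokens, PROMPTS))

# Sequential: one request at a time, so the effective batch size is 1.
bench("in sequence", lambda: map(completion_tokens, PROMPTS))
```

The gap between the two runs in the numbers above (~220 vs. ~130 tokens/s) is the batching win: overlapping requests share forward passes instead of queueing behind each other.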
