Ever since #3228, completion requests to the server example occasionally return a long run of consecutive colons before a readable response, and sometimes the output is almost exclusively colons, for example:
{"content": "::::::::::::::::::::::::: Hello, I'm an AI created by ChatBot. How can I assist you today?"}
{"content": "::::::::::::::::?"}
I've tested on a range of models (Mythomax 13B, Mythomax Kimiko 13B, Luna 7B, MlewdBoros 13B, Synthia 7B) and get the same results. I can reproduce it by sending this body to the server continually:
{"n_predict":256,"prompt":"Text transcript of a never-ending conversation between User and Assistant.\n\n#User: hi there\n#Assistant:", "stop":["\n#","\nUser:","\nuser:","\n["]}
It does not happen on every response (roughly 1 in 5-10 responses are affected), but often enough to be distracting and to make me wonder if I'm doing something wrong. I know the repeat_penalty and logit_bias fields should help here, but from my testing they have no effect on the problem, and they also weren't explicitly needed before the aforementioned PR.
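For reference, this is a sketch of the kind of body I used when testing those fields; the specific values are just what I experimented with, and COLON_TOKEN_ID is a placeholder for the model's ":" token id, which would need to be looked up first (e.g. via the server's /tokenize endpoint):
{"n_predict":256,"repeat_penalty":1.2,"logit_bias":[[COLON_TOKEN_ID,false]],"prompt":"Text transcript of a never-ending conversation between User and Assistant.\n\n#User: hi there\n#Assistant:", "stop":["\n#","\nUser:","\nuser:","\n["]}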
I'm running on an M1 Max chip and writing this as of commit 9f6ede1.
Does anyone have any insights into how I could fix this or if this is perhaps a bug in the server example?
I compiled the binary with make server from the repository root, ran ./server -m <my_model>, and tested via Postman with the equivalent of this curl:
curl --location 'http://localhost:8080/completion' \
--header 'Content-Type: application/json' \
--data '{"n_predict":256,"prompt":"Text transcript of a never-ending conversation between User and Assistant.\n\n#User: hi there\n#Assistant:", "stop":["\n#","\nUser:","\nuser:","\n["]}'
It's not easy to reproduce consistently; sometimes it takes running the above a few dozen times before the colons appear.
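If it helps anyone reproduce, here is roughly the shell equivalent of what I was doing via Postman; the iteration count is arbitrary, and the colons usually show up within a few dozen requests:
for i in $(seq 1 50); do
  # same request as above, repeated until the colon-prefixed output appears
  curl -s --location 'http://localhost:8080/completion' \
    --header 'Content-Type: application/json' \
    --data '{"n_predict":256,"prompt":"Text transcript of a never-ending conversation between User and Assistant.\n\n#User: hi there\n#Assistant:", "stop":["\n#","\nUser:","\nuser:","\n["]}'
  echo
done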
Thanks so much for the quick response and fix! This project is great and you guys do a wonderful job maintaining and updating it!