Commit 7fb66eb

server : fix test regexes

1 parent 0ae2860 commit 7fb66eb

File tree: 2 files changed, +7 −7 lines changed


examples/server/tests/features/server.feature (5 additions, 5 deletions)

@@ -37,8 +37,8 @@ Feature: llama.cpp server

     Examples: Prompts
       | prompt                                                                    | n_predict | re_content                                  | n_prompt | n_predicted | truncated |
-      | I believe the meaning of life is                                          | 8         | (read\|going)+                              | 18       | 8           | not       |
-      | Write a joke about AI from a very long prompt which will not be truncated | 256       | (princesses\|everyone\|kids\|Anna\|forest)+ | 46       | 64          | not       |
+      | I believe the meaning of life is                                          | 8         | (read\|going\|pretty)+                      | 18       | 8           | not       |
+      | Write a joke about AI from a very long prompt which will not be truncated | 256       | (princesses\|everyone\|kids\|Anna\|forest)+ | 45       | 64          | not       |

   Scenario: Completion prompt truncated
     Given a prompt:

@@ -67,8 +67,8 @@ Feature: llama.cpp server

     Examples: Prompts
       | model        | system_prompt               | user_prompt                          | max_tokens | re_content                         | n_prompt | n_predicted | enable_streaming | truncated |
-      | llama-2      | Book                        | What is the best book                | 8          | (Here\|what)+                      | 77       | 8           | disabled         | not       |
-      | codellama70b | You are a coding assistant. | Write the fibonacci function in c++. | 128        | (thanks\|happy\|bird\|Annabyear)+  | -1       | 64          | enabled          |           |
+      | llama-2      | Book                        | What is the best book                | 8          | (Here\|what)+                      | 76       | 8           | disabled         | not       |
+      | codellama70b | You are a coding assistant. | Write the fibonacci function in c++. | 128        | (thanks\|happy\|bird\|fireplace)+  | -1       | 64          | enabled          |           |

   Scenario Outline: OAI Compatibility w/ response format

@@ -84,7 +84,7 @@ Feature: llama.cpp server
       | response_format                                                     | n_predicted | re_content     |
       | {"type": "json_object", "schema": {"const": "42"}}                  | 5           | "42"           |
       | {"type": "json_object", "schema": {"items": [{"type": "integer"}]}} | 10          | \[ -300 \]     |
-      | {"type": "json_object"}                                             | 10          | \{ " Jacky.    |
+      | {"type": "json_object"}                                             | 10          | \{ " Saragine. |

   Scenario: Tokenize / Detokenize
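The `re_content` column is treated as a regular expression that must occur somewhere in the generated text; the `\|` sequences only escape the Gherkin table separator, so the pattern actually compiled is e.g. `(read|going|pretty)+`. A minimal sketch of that check, assuming a Python harness (the helper name `matches_re_content` is illustrative, not the test suite's actual API):

```python
import re

def matches_re_content(content: str, re_content: str) -> bool:
    # Gherkin tables escape '|' as '\|'; unescape before compiling.
    pattern = re_content.replace(r"\|", "|")
    # The assertion only requires the pattern to occur somewhere
    # in the completion, hence re.search rather than re.fullmatch.
    return re.search(pattern, content) is not None

# e.g. the updated first example row:
print(matches_re_content("I think it is pretty simple", r"(read\|going\|pretty)+"))  # True
```

This is why the fix is simply widening the alternation: a newly sampled token such as "pretty" made the old pattern miss, and adding it to the alternatives makes the search succeed again.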

examples/server/tests/features/slotsave.feature (2 additions, 2 deletions)

@@ -26,7 +26,7 @@ Feature: llama.cpp server slot management
     # Since we have cache, this should only process the last tokens
     Given a user prompt "What is the capital of Germany?"
     And a completion request with no api error
-    Then 24 tokens are predicted matching (Thank|special)
+    Then 24 tokens are predicted matching (Thank|special|Lily)
     And 7 prompt tokens are processed
     # Loading the original cache into slot 0,
     # we should only be processing 1 prompt token and get the same output

@@ -41,7 +41,7 @@ Feature: llama.cpp server slot management
     Given a user prompt "What is the capital of Germany?"
     And using slot id 1
     And a completion request with no api error
-    Then 24 tokens are predicted matching (Thank|special)
+    Then 24 tokens are predicted matching (Thank|special|Lily)
     And 1 prompt tokens are processed

   Scenario: Erase Slot
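The "prompt tokens are processed" counts in these scenarios come from prompt caching: when a slot's KV cache already holds a prefix of the new prompt, only the suffix after the shared prefix needs to be evaluated. A minimal sketch of that arithmetic (illustrative helper names, not the server's actual implementation, which also always re-evaluates at least one token to sample from):

```python
def common_prefix_len(a: list[int], b: list[int]) -> int:
    # Length of the longest shared prefix of two token sequences.
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def tokens_to_process(cached_tokens: list[int], prompt_tokens: list[int]) -> int:
    # With a warm prompt cache, only the suffix past the shared
    # prefix is fed through the model again.
    return len(prompt_tokens) - common_prefix_len(cached_tokens, prompt_tokens)
```

Under this model, a follow-up prompt that shares most of its tokens with the cache costs only a handful of prompt tokens (7 in the first scenario), and restoring a saved slot with the identical prompt reduces the cost to a single token.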

0 commit comments