Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-26 14:20:31 +01:00
Commit beea6e1b16:

* llama : save and restore kv cache for single seq id
* remove trailing whitespace
* respond error in case there's no space in the kv cache
* add kv seq save restore to test case
* add --slot-save-path arg to enable save restore and restrict save location
* Returning 0 for some cases, instead of asserting.
* cleanup error cases
* rename sequence state functions
* rename state get set functions
* add previous function names back in with DEPRECATED notice
* update doc
* adjust endpoints to preferred style
* fix restoring zero cell count
* handle seq rm return value
* unused param
* keep in the size check
* fix return types
* add server test case for slot save restore
* cleanup
* add cake
* cleanup style
* add special
* removing a whole sequence never fails
* move sequence state file functionality from server to llama to match session api and add version tags
* catch exceptions on save as well
* error log messages
* check types for stricter restore
* update server doc
* readme : update API changes date
* strict filename validation
* move include, reject bom as well
* also reject empty filename
* reject whitespace and trailing dot

Co-authored-by: Martin Evans <martindevans@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
59 lines · 2.4 KiB · Gherkin
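The Gherkin feature below is the server test added for this change: it drives the new per-slot KV-cache save, restore, and erase operations through the server's HTTP API, which is only available when the server is started with --slot-save-path. As a rough orientation, the building block every step relies on is a slot-pinned completion request, sketched here in Python; the field names (id_slot, cache_prompt, n_predict) and the timings fields are assumptions taken from the server documentation of this period, not from this page, so check them against your server version.

    import requests

    BASE_URL = "http://localhost:8080"  # matches the Background of the feature below

    # A slot-pinned completion with prompt caching, as the test steps issue it.
    # Field names here are assumptions from the llama.cpp server docs of this era.
    resp = requests.post(
        f"{BASE_URL}/completion",
        json={
            "prompt": "What is the capital of France?",
            "id_slot": 1,
            "cache_prompt": True,
            "n_predict": 24,
        },
    )
    resp.raise_for_status()
    data = resp.json()
    print(data["content"])              # generated text
    print(data["timings"]["prompt_n"])  # prompt tokens actually processed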
@llama.cpp
@slotsave
Feature: llama.cpp server slot management

  Background: Server startup
    Given a server listening on localhost:8080
    And   a model file tinyllamas/stories260K.gguf from HF repo ggml-org/models
    And   prompt caching is enabled
    And   2 slots
    And   . as slot save path
    And   2048 KV cache size
    And   42 as server seed
    And   24 max tokens to predict
    Then  the server is starting
    Then  the server is healthy
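The Background above maps fairly directly onto server launch options. A minimal sketch of an equivalent launch follows, assuming a llama.cpp server binary and a locally downloaded stories260K.gguf; the flag spellings are assumptions based on the server README of this period and worth checking against your build. The scenarios that exercise these settings come right after.

    import subprocess

    # Rough equivalent of the Background above; flag names are assumptions,
    # not taken from this page.
    server = subprocess.Popen([
        "./llama-server",           # builds contemporary with this commit name it ./server
        "-m", "stories260K.gguf",   # tinyllamas model from the ggml-org/models HF repo
        "--host", "localhost",
        "--port", "8080",
        "--parallel", "2",          # 2 slots
        "--slot-save-path", ".",    # enables slot save/restore, restricted to this directory
        "--ctx-size", "2048",
        "--seed", "42",
        "--n-predict", "24",
    ])
    # "prompt caching is enabled" is not a launch flag here: the test turns it on
    # per request by sending cache_prompt: true with each completion.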
  Scenario: Save and Restore Slot
    # First prompt in slot 1 should be fully processed
    Given a user prompt "What is the capital of France?"
    And   using slot id 1
    And   a completion request with no api error
    Then  24 tokens are predicted matching (Lily|cake)
    And   22 prompt tokens are processed
    When  the slot 1 is saved with filename "slot1.bin"
    Then  the server responds with status code 200
    # Since we have cache, this should only process the last tokens
    Given a user prompt "What is the capital of Germany?"
    And   a completion request with no api error
    Then  24 tokens are predicted matching (Thank|special)
    And   7 prompt tokens are processed
    # Loading the original cache into slot 0,
    # we should only be processing 1 prompt token and get the same output
    When  the slot 0 is restored with filename "slot1.bin"
    Then  the server responds with status code 200
    Given a user prompt "What is the capital of France?"
    And   using slot id 0
    And   a completion request with no api error
    Then  24 tokens are predicted matching (Lily|cake)
    And   1 prompt tokens are processed
    # For verification that slot 1 was not corrupted during slot 0 load, same thing
    Given a user prompt "What is the capital of Germany?"
    And   using slot id 1
    And   a completion request with no api error
    Then  24 tokens are predicted matching (Thank|special)
    And   1 prompt tokens are processed
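The same save/restore round trip as the scenario above can be sketched as plain HTTP calls. The /slots/{id}?action=save and ?action=restore endpoints with a "filename" body field follow the server documentation updated by this commit, but treat the exact request shapes as assumptions and verify them against your server version. The second scenario, Erase Slot, follows after the sketch.

    import requests

    BASE_URL = "http://localhost:8080"

    def complete(prompt, slot):
        r = requests.post(f"{BASE_URL}/completion",
                          json={"prompt": prompt, "id_slot": slot,
                                "cache_prompt": True, "n_predict": 24})
        r.raise_for_status()
        return r.json()

    # Fill slot 1, then persist its KV cache to slot1.bin under --slot-save-path.
    first = complete("What is the capital of France?", slot=1)
    requests.post(f"{BASE_URL}/slots/1?action=save",
                  json={"filename": "slot1.bin"}).raise_for_status()

    # Restore that state into slot 0; repeating the prompt should then reprocess
    # only the final token, which is what the scenario's "1 prompt tokens" asserts.
    requests.post(f"{BASE_URL}/slots/0?action=restore",
                  json={"filename": "slot1.bin"}).raise_for_status()
    restored = complete("What is the capital of France?", slot=0)
    print(first["timings"]["prompt_n"], restored["timings"]["prompt_n"])  # e.g. 22, then 1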
  Scenario: Erase Slot
    Given a user prompt "What is the capital of France?"
    And   using slot id 1
    And   a completion request with no api error
    Then  24 tokens are predicted matching (Lily|cake)
    And   22 prompt tokens are processed
    When  the slot 1 is erased
    Then  the server responds with status code 200
    Given a user prompt "What is the capital of France?"
    And   a completion request with no api error
    Then  24 tokens are predicted matching (Lily|cake)
    And   22 prompt tokens are processed
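The Erase Slot scenario above has the same shape in plain HTTP: after POST /slots/1?action=erase the cached prefix is gone, so repeating the prompt processes the full 22 prompt tokens again. A short sketch, with the same caveats about the exact endpoint shape being an assumption:

    import requests

    BASE_URL = "http://localhost:8080"

    def prompt_tokens(prompt, slot):
        r = requests.post(f"{BASE_URL}/completion",
                          json={"prompt": prompt, "id_slot": slot,
                                "cache_prompt": True, "n_predict": 24})
        r.raise_for_status()
        return r.json()["timings"]["prompt_n"]

    before = prompt_tokens("What is the capital of France?", slot=1)  # first run: full prompt processed
    requests.post(f"{BASE_URL}/slots/1?action=erase").raise_for_status()
    after = prompt_tokens("What is the capital of France?", slot=1)   # cache erased: full prompt again
    print(before, after)  # the scenario expects 22 both times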