2002bc96bf

* server : refactoring (wip)
* server : remove llava/clip objects from build
* server : fix empty prompt handling + all slots idle logic
* server : normalize id vars
* server : code style
* server : simplify model chat template validation
* server : code style
* server : minor
* llama : llama_chat_apply_template support null buf
* server : do not process embedding requests when disabled
* server : reorganize structs and enums + naming fixes
* server : merge oai.hpp in utils.hpp
* server : refactor system prompt update at start
* server : disable cached prompts with self-extend
* server : do not process more than n_batch tokens per iter
* server: tests: embeddings use a real embeddings model (#5908)
* server, tests : bump batch to fit 1 embedding prompt
* server: tests: embeddings fix build type Debug is randomly failing (#5911)
* server: tests: embeddings, use different KV Cache size
* server: tests: embeddings, fixed prompt do not exceed n_batch, increase embedding timeout, reduce number of concurrent embeddings
* server: tests: embeddings, no need to wait for server idle as it can timeout
* server: refactor: clean up http code (#5912)
* server : avoid n_available var (ggml-ci)
* server: refactor: better http codes
* server : simplify json parsing + add comment about t_last
* server : rename server structs
* server : allow to override FQDN in tests (ggml-ci)
* server : add comments

---------

Co-authored-by: Pierrick Hymbert <pierrick.hymbert@gmail.com>
64 lines · 2.5 KiB · Gherkin
@llama.cpp
@server
Feature: llama.cpp server

  Background: Server startup
    Given a server listening on localhost:8080
    And a model file tinyllamas/stories260K.gguf from HF repo ggml-org/models
    And a model alias tinyllama-2
    And 42 as server seed
    # KV Cache corresponds to the total amount of tokens
    # that can be stored across all independent sequences: #4130
    # see --ctx-size and #5568
    And 32 KV cache size
    And 512 as batch size
    And 1 slots
    And embeddings extraction
    And 32 server max tokens to predict
    And prometheus compatible metrics exposed
    Then the server is starting
    Then the server is healthy
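The Background steps above map to server startup options. Below is a minimal sketch of an equivalent launch via Python's subprocess module; it assumes a locally built llama.cpp `server` binary and a local copy of the GGUF file, and the exact flag spellings may differ between llama.cpp versions.

```python
# Sketch: start a llama.cpp server roughly matching the Background above.
# Binary path, model path, and flag names are assumptions; check
# `./server --help` for the version under test.
import subprocess

server = subprocess.Popen([
    "./server",
    "--host", "localhost", "--port", "8080",
    "--model", "models/stories260K.gguf",  # tinyllamas/stories260K.gguf from ggml-org/models
    "--alias", "tinyllama-2",
    "--seed", "42",
    "--ctx-size", "32",      # KV cache size: total tokens across all sequences
    "--batch-size", "512",
    "--parallel", "1",       # 1 slot
    "--embedding",           # enable embeddings extraction
    "--n-predict", "32",     # server-wide cap on tokens to predict
    "--metrics",             # expose Prometheus-compatible metrics
])
```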

  Scenario: Health
    Then the server is ready
    And all slots are idle
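The health scenario boils down to polling the server's `/health` endpoint until it reports ready. A sketch using only the Python standard library; the response fields checked here (`status`, slot counters) are assumptions about the JSON shape and may differ by server version.

```python
# Sketch: poll /health until the server reports it is ready.
import json
import time
import urllib.request

def wait_ready(base_url="http://localhost:8080", timeout_s=30):
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/health") as resp:
                body = json.load(resp)
                # "ok" is an assumed ready status; 503 is raised while loading.
                if body.get("status") == "ok":
                    return body
        except OSError:
            pass  # server still starting or not yet listening
        time.sleep(0.5)
    raise TimeoutError("server did not become healthy in time")
```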

  Scenario Outline: Completion
    Given a prompt <prompt>
    And <n_predict> max tokens to predict
    And a completion request with no api error
    Then <n_predicted> tokens are predicted matching <re_content>
    And prometheus metrics are exposed

    Examples: Prompts
      | prompt                           | n_predict | re_content                       | n_predicted |
      | I believe the meaning of life is | 8         | (read\|going)+                   | 8           |
      | Write a joke about AI            | 64        | (park\|friends\|scared\|always)+ | 32          |
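This scenario exercises the server's `/completion` endpoint. A minimal sketch of one request from the table above, using the `prompt`/`n_predict` request fields and the `content`/`tokens_predicted` response fields; field names are taken as they exist in llama.cpp's server API, but should be verified against the version under test.

```python
# Sketch: POST a completion request and check the result like the feature does.
import json
import re
import urllib.request

def completion(prompt, n_predict, base_url="http://localhost:8080"):
    payload = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    req = urllib.request.Request(
        f"{base_url}/completion", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

res = completion("I believe the meaning of life is", 8)
assert res["tokens_predicted"] == 8                 # n_predicted column
assert re.search(r"(read|going)+", res["content"])  # re_content column
```

The scenario also asserts that Prometheus metrics are exposed; with `--metrics` enabled, a plain GET on `/metrics` returns the text exposition format.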

  Scenario Outline: OAI Compatibility
    Given a model <model>
    And a system prompt <system_prompt>
    And a user prompt <user_prompt>
    And <max_tokens> max tokens to predict
    And streaming is <enable_streaming>
    Given an OAI compatible chat completions request with no api error
    Then <n_predicted> tokens are predicted matching <re_content>

    Examples: Prompts
      | model        | system_prompt               | user_prompt                          | max_tokens | re_content             | n_predicted | enable_streaming |
      | llama-2      | Book                        | What is the best book                | 8          | (Mom\|what)+           | 8           | disabled         |
      | codellama70b | You are a coding assistant. | Write the fibonacci function in c++. | 64         | (thanks\|happy\|bird)+ | 32          | enabled          |
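The OAI compatibility scenario targets the OpenAI-style `/v1/chat/completions` endpoint the server emulates. A sketch of the non-streaming case only (streaming responses arrive as server-sent events and need different parsing); the request/response shape follows the standard OpenAI chat schema.

```python
# Sketch: OpenAI-compatible chat completion against the llama.cpp server.
import json
import urllib.request

def chat_completion(model, system_prompt, user_prompt, max_tokens,
                    base_url="http://localhost:8080"):
    payload = json.dumps({
        "model": model,
        "max_tokens": max_tokens,
        "stream": False,  # streaming case not covered by this sketch
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

res = chat_completion("llama-2", "Book", "What is the best book", 8)
print(res["choices"][0]["message"]["content"])
```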

  Scenario: Tokenize / Detokenize
    When tokenizing:
      """
      What is the capital of France ?
      """
    Then tokens can be detokenized
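This scenario pairs the server's `/tokenize` and `/detokenize` endpoints in a round trip. A sketch assuming each endpoint exchanges JSON with `content` and `tokens` fields, as in llama.cpp's server API.

```python
# Sketch: tokenize a string, then detokenize the result back to text.
import json
import urllib.request

def post_json(path, payload, base_url="http://localhost:8080"):
    req = urllib.request.Request(
        f"{base_url}{path}", data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

text = "What is the capital of France ?"
tokens = post_json("/tokenize", {"content": text})["tokens"]
roundtrip = post_json("/detokenize", {"tokens": tokens})["content"]
# Detokenization may differ in leading/trailing whitespace, hence strip().
assert roundtrip.strip() == text.strip()
```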

  Scenario: Models available
    Given available models
    Then 1 models are supported
    Then model 0 is identified by tinyllama-2
    Then model 0 is trained on 128 tokens context
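The final scenario reads the model list. A sketch against the OpenAI-style `/v1/models` endpoint, assuming the first entry's `id` carries the model alias; the training-context check would read a metadata field whose exact name depends on the server version, so it is only noted here.

```python
# Sketch: list models and check the alias, as the scenario does.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:8080/v1/models") as resp:
    models = json.load(resp)["data"]

assert len(models) == 1
assert models[0]["id"] == "tinyllama-2"
# The "trained on 128 tokens context" step would inspect model metadata here;
# the field name (e.g. a training context size entry) is version-dependent.
```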