Server tests

Python-based server test scenarios using pytest.

Tests target GitHub workflow job runners with 4 vCPUs.

Note: If inference on the host is faster than on the GitHub runners, parallel scenarios may randomly fail. To mitigate this, you can increase the values of n_predict and kv_size.

Install dependencies

pip install -r requirements.txt
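
Optionally, install them into a Python virtual environment first (not required by the test suite, just a standard isolation step):

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt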

Run tests

  1. Build the server:
     cd ../../..
     cmake -B build -DLLAMA_CURL=ON
     cmake --build build --target llama-server
  2. Start the tests: ./tests.sh

It's possible to override some scenario step values with environment variables:

- PORT: sets the server's listening port during the scenario (context.server_port), default: 8080
- LLAMA_SERVER_BIN_PATH: overrides the server binary path, default: ../../../build/bin/llama-server
- DEBUG: enables verbose output for the steps and the server (--verbose)
- N_GPU_LAYERS: number of model layers to offload to VRAM (-ngl, --n-gpu-layers)
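
For example, to run the scenarios on a different port and offload all model layers to the GPU (the values below are only illustrative):

PORT=8081 N_GPU_LAYERS=99 ./tests.sh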

To run slow tests:

SLOW_TESTS=1 ./tests.sh

To run with stdout/stderr displayed in real time (verbose output, but useful for debugging):

DEBUG=1 ./tests.sh -s -v -x

Hint: You can compile and run the tests in a single command, which is useful for local development:

cmake --build build -j --target llama-server && ./examples/server/tests/tests.sh

To see all available arguments, please refer to the pytest documentation.
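
Since tests.sh passes its arguments through to pytest, you can also run a subset of tests with pytest's -k expression (the filter string below is only an example):

./tests.sh -v -k "completion"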