# Server tests

Python-based server test scenarios using BDD and behave:

- Tests target GitHub workflow job runners with 4 vCPUs.
- Requests are made with aiohttp, an asyncio-based HTTP client.

Note: if inference on the host is faster than on the GitHub runners, parallel scenarios may randomly fail. To mitigate this, increase the values of `n_predict` and `kv_size`.
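
For illustration, here is a minimal sketch of the kind of concurrent, asyncio-based requests the scenarios issue. It assumes a server already listening on `localhost:8080` (the default `PORT` below) and uses the llama.cpp server's `/health` endpoint:

```python
# Minimal sketch of concurrent aiohttp requests in the spirit of the test client.
# Assumes a llama.cpp server is already running on localhost:8080.
import asyncio

import aiohttp


async def check_health(session: aiohttp.ClientSession) -> int:
    async with session.get("http://localhost:8080/health") as resp:
        return resp.status


async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # Fire several requests concurrently, as the parallel scenarios do.
        statuses = await asyncio.gather(*(check_health(session) for _ in range(4)))
        print(statuses)  # e.g. [200, 200, 200, 200] once the model is loaded


asyncio.run(main())
```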

## Install dependencies

```shell
pip install -r requirements.txt
```

## Run tests

1. Build the server:

    ```shell
    cd ../../..
    mkdir build
    cd build
    cmake -DLLAMA_CURL=ON ../
    cmake --build . --target server
    ```

2. Start the tests: `./tests.sh`

Some scenario step values can be overridden with environment variables:

| variable | description |
| --- | --- |
| `PORT` | `context.server_port` to set the listening port of the server during the scenario, default: `8080` |
| `LLAMA_SERVER_BIN_PATH` | to change the server binary path, default: `../../../build/bin/server` |
| `DEBUG` | `"ON"` to enable verbose mode for the steps and the server (`--verbose`) |
| `SERVER_LOG_FORMAT_JSON` | if set, switches server logs to JSON format |
| `N_GPU_LAYERS` | number of model layers to offload to VRAM (`-ngl`, `--n-gpu-layers`) |
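
For reference, the harness can pick up these overrides along the following lines. This is a simplified sketch; the actual logic lives in `steps.py` and the behave hooks and may differ in detail:

```python
# Simplified sketch of reading the overrides from the table above.
# Variable names match the table; the surrounding structure is illustrative.
import os

server_port = int(os.environ.get("PORT", "8080"))
server_bin_path = os.environ.get("LLAMA_SERVER_BIN_PATH", "../../../build/bin/server")
debug = os.environ.get("DEBUG") == "ON"
server_log_json = "SERVER_LOG_FORMAT_JSON" in os.environ
n_gpu_layers = int(os.environ.get("N_GPU_LAYERS", "0"))

if debug:
    print(f"server: {server_bin_path} on port {server_port}, "
          f"json logs: {server_log_json}, ngl: {n_gpu_layers}")
```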

## Run @bug, @wip or @wrong_usage annotated scenarios

A Feature or Scenario must be annotated with `@llama.cpp` to be included in the default scope.

- `@bug` links a scenario to an existing GitHub issue.
- `@wrong_usage` scenarios show user issues that are actually expected behavior.
- `@wip` marks a scenario that is a work in progress.
- `@slow` marks a heavy test, disabled by default.

To run a scenario annotated with `@bug`, start:

```shell
DEBUG=ON ./tests.sh --no-skipped --tags bug --stop
```

After changing logic in `steps.py`, ensure that the `@bug` and `@wrong_usage` scenarios are updated:

```shell
./tests.sh --no-skipped --tags bug,wrong_usage || echo "failed as expected"
```
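
When editing `steps.py`, step definitions follow behave's decorator pattern. Here is a hypothetical step in that style; the step text and context attribute are illustrative, not taken from the actual file:

```python
# Hypothetical step definition in the style of steps.py (illustrative only).
from behave import step


@step('a maximum of {n_predict:d} tokens to predict')
def step_n_predict(context, n_predict):
    # Stash the scenario value on the behave context so a later step can
    # include it in the completion request payload.
    context.n_predict = n_predict
```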