Server tests
Python-based server test scenarios using BDD and behave:
- issues.feature Pending issues scenario
- parallel.feature Scenario involving multi slots and concurrent requests
- security.feature Security, CORS and API Key
- server.feature Server base scenario: completion, embedding, tokenization, etc...
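For illustration, a Gherkin step in one of these feature files is bound to Python code in steps.py roughly as follows; this is a minimal sketch with a hypothetical step text and attribute names, not code copied from the test suite:

# Hypothetical behave step definition, illustrating how a Gherkin step
# such as "Given a server listening on localhost:8080" binds to Python code.
from behave import given

@given(u'a server listening on {host}:{port}')
def step_server_listening(context, host, port):
    # behave passes the parsed placeholders as strings
    context.server_host = host
    context.server_port = int(port)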
Tests target GitHub Workflows job runners with 4 vCPUs.
Requests are made with aiohttp, an asyncio-based HTTP client.
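As a rough illustration of that client side, a request could look like the following; this is a minimal sketch, and the /health endpoint and default port are assumptions for illustration, not code from the test suite:

# Minimal aiohttp/asyncio sketch of a request against a local server.
import aiohttp
import asyncio

async def check_health(base_url="http://localhost:8080"):
    async with aiohttp.ClientSession() as session:
        async with session.get(f"{base_url}/health") as response:
            return response.status, await response.json()

if __name__ == "__main__":
    print(asyncio.run(check_health()))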
Note: If the host architecture's inference speed is faster than that of the GitHub runners, the parallel scenario may randomly fail. To mitigate this, you can increase the values of n_predict and kv_size.
Install dependencies
pip install -r requirements.txt
Run tests
- Build the server
cd ../../..
mkdir build
cd build
cmake ../
cmake --build . --target server
- Download the required models:
../../../scripts/hf.sh --repo ggml-org/models --file tinyllamas/stories260K.gguf
- Start the test:
./tests.sh
It's possible to override some scenario step values with environment variables (see the sketch after this list):
- PORT -> context.server_port to set the listening port of the server during the scenario, default: 8080
- LLAMA_SERVER_BIN_PATH -> to change the server binary path, default: ../../../build/bin/server
- DEBUG -> "ON" to enable steps and server verbose mode --verbose
- SERVER_LOG_FORMAT_JSON -> if set, switches server logs to JSON format
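For illustration, such variables could be consumed in a behave environment hook roughly like this; a minimal sketch with assumed attribute names, not the actual steps.py logic:

# Hypothetical behave hook showing how environment variables could override
# scenario defaults; attribute names are assumptions for illustration.
import os

def before_scenario(context, scenario):
    context.server_port = int(os.environ.get("PORT", "8080"))
    context.debug = os.environ.get("DEBUG", "") == "ON"
    context.server_path = os.environ.get("LLAMA_SERVER_BIN_PATH",
                                         "../../../build/bin/server")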
Run @bug, @wip or @wrong_usage annotated scenarios
A Feature or Scenario must be annotated with @llama.cpp to be included in the default scope.
- @bug aims to link a scenario with a GitHub issue.
- @wrong_usage marks user-reported issues that are actually expected behavior.
- @wip marks a scenario that is a work in progress.
To run a scenario annotated with @bug, start:
DEBUG=ON ./tests.sh --no-skipped --tags bug
After changing logic in steps.py, ensure that the @bug and @wrong_usage scenarios are updated.