llama.cpp/examples/server/tests/unit

Latest commit: 45095a61bf by Xuan Son Nguyen (2024-12-31 15:22:01 +01:00)
server : clean up built-in template detection (#11026)

* server : clean up built-in template detection
* fix compilation
* add chat template test
* fix condition
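
The commit body above mentions adding a chat template test. As a rough illustration only (not the repository's actual test harness), a pytest-style check that the server applies its built-in chat template on the OAI-compatible /v1/chat/completions endpoint could look like the sketch below; it assumes a llama-server instance already listening on localhost:8080 and the `requests` package.

import requests

BASE_URL = "http://localhost:8080"  # assumed host/port of a running llama-server


def test_chat_completion_uses_builtin_template():
    # OpenAI-compatible chat request; the server renders the messages with its
    # built-in (GGUF-embedded) chat template before generation.
    res = requests.post(
        f"{BASE_URL}/v1/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Say hello."},
            ],
            "max_tokens": 16,
        },
    )
    assert res.status_code == 200
    body = res.json()
    # OpenAI-compatible shape: choices[0].message.content holds the reply
    assert body["choices"][0]["message"]["content"]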
File                     Last change                 Last commit
test_basic.py            2024-12-10 18:22:34 +01:00  server : add flag to disable the web-ui (#10762) (#10751)
test_chat_completion.py  2024-12-31 15:22:01 +01:00  server : clean up built-in template detection (#11026)
test_completion.py       2024-12-31 12:34:13 +01:00  server : add OAI compat for /v1/completions (#10974)
test_ctx_shift.py        2024-11-26 16:20:18 +01:00  server : replace behave with pytest (#10416)
test_embedding.py        2024-12-24 21:33:04 +01:00  server : add support for "encoding_format": "base64" to the */embeddings endpoints (#10967)
test_infill.py           2024-12-08 23:04:29 +01:00  server : fix format_infill (#10724)
test_lora.py             2024-11-26 16:20:18 +01:00  server : replace behave with pytest (#10416)
test_rerank.py           2024-12-17 18:00:24 +02:00  server : fill usage info in embeddings and rerank responses (#10852)
test_security.py         2024-11-26 16:20:18 +01:00  server : replace behave with pytest (#10416)
test_slot_save.py        2024-11-26 16:20:18 +01:00  server : replace behave with pytest (#10416)
test_speculative.py      2024-12-04 22:38:20 +02:00  server : fix speculative decoding with context shift (#10641)
test_tokenize.py         2024-11-26 16:20:18 +01:00  server : replace behave with pytest (#10416)
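
The test_embedding.py entry refers to the OpenAI-style "encoding_format": "base64" option on the embeddings endpoints. The sketch below shows how a client or test might request and decode such a response; it assumes a llama-server with an embedding-capable model on localhost:8080 and that the base64 payload encodes the raw little-endian float32 vector, as in the OpenAI API.

import base64
import struct

import requests

BASE_URL = "http://localhost:8080"  # assumed host/port of a running llama-server


def fetch_embedding_base64(text: str) -> list[float]:
    # Request the embedding with the compact base64 encoding instead of a JSON float list.
    res = requests.post(
        f"{BASE_URL}/v1/embeddings",
        json={"input": text, "encoding_format": "base64"},
    )
    res.raise_for_status()
    b64 = res.json()["data"][0]["embedding"]  # base64 string rather than a float array
    raw = base64.b64decode(b64)
    # Decode as little-endian float32 values (4 bytes each).
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))


if __name__ == "__main__":
    vec = fetch_embedding_base64("hello world")
    print(len(vec), vec[:4])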