mirror of https://github.com/ggerganov/llama.cpp.git synced 2025-01-25 10:58:56 +01:00
llama.cpp/examples/server/tests/unit
Latest commit e6e7c75d94 by Georgi Gerganov (2025-01-06 15:36:08 +02:00)

server : fix extra BOS in infill endpoint ()

* server : fix extra BOS in infill endpoint

ggml-ci

* server : update infill tests
test_basic.py
test_chat_completion.py  server : clean up built-in template detection ()                                        2024-12-31 15:22:01 +01:00
test_completion.py       server : add OAI compat for /v1/completions ()                                          2024-12-31 12:34:13 +01:00
test_ctx_shift.py
test_embedding.py        server : add support for "encoding_format": "base64" to the */embeddings endpoints ()   2024-12-24 21:33:04 +01:00
test_infill.py           server : fix extra BOS in infill endpoint ()                                            2025-01-06 15:36:08 +02:00
test_lora.py             server : allow using LoRA adapters per-request ()                                       2025-01-02 15:05:18 +01:00
test_rerank.py
test_security.py
test_slot_save.py
test_speculative.py      server : allow using LoRA adapters per-request ()                                       2025-01-02 15:05:18 +01:00
test_tokenize.py