llama.cpp/examples/server/tests/features

Latest commit: 1bde94dd02 by Georgi Gerganov (2024-10-12 16:06:31 +03:00)

server : remove self-extend features (#9860)

* server : remove self-extend
* server : fix context limit check to use slot.n_past

ggml-ci
File                  Last commit                                                                Date
steps                 server : better security control for public deployments (#9776)            2024-10-08 13:27:04 +02:00
ctx_shift.feature     server : remove self-extend features (#9860)                               2024-10-12 16:06:31 +03:00
embeddings.feature    llama : add reranking support (#9510)                                      2024-09-28 17:42:03 +03:00
environment.py
issues.feature
lora.feature
parallel.feature      server : simplify state machine for slot (#9283)                           2024-09-06 23:21:29 +02:00
passkey.feature       server : simplify state machine for slot (#9283)                           2024-09-06 23:21:29 +02:00
rerank.feature        llama : add reranking support (#9510)                                      2024-09-28 17:42:03 +03:00
results.feature
security.feature      server : better security control for public deployments (#9776)            2024-10-08 13:27:04 +02:00
server.feature        server : Add option to return token pieces in /tokenize endpoint (#9108)   2024-09-12 22:30:11 +02:00
slotsave.feature
wrong_usages.feature