mirror of https://github.com/ggerganov/llama.cpp.git synced 2025-01-19 00:18:57 +01:00
llama.cpp/examples/server/tests/features
Latest commit: 1bde94dd02 by Georgi Gerganov, 2024-10-12 16:06:31 +03:00

server : remove self-extend features ()

* server : remove self-extend
* server : fix context limit check to use slot.n_past
| File | Last commit | Date |
| --- | --- | --- |
| steps | server : better security control for public deployments () | 2024-10-08 13:27:04 +02:00 |
| ctx_shift.feature | server : remove self-extend features () | 2024-10-12 16:06:31 +03:00 |
| embeddings.feature | llama : add reranking support () | 2024-09-28 17:42:03 +03:00 |
| environment.py | server tests : more pythonic process management; fix bare except: () | 2024-03-20 06:33:49 +01:00 |
| issues.feature | server: tests: passkey challenge / self-extend with context shift demo () | 2024-03-02 22:00:14 +01:00 |
| lora.feature | server : add lora hotswap endpoint (WIP) () | 2024-08-06 17:33:39 +02:00 |
| parallel.feature | server : simplify state machine for slot () | 2024-09-06 23:21:29 +02:00 |
| passkey.feature | server : simplify state machine for slot () | 2024-09-06 23:21:29 +02:00 |
| rerank.feature | llama : add reranking support () | 2024-09-28 17:42:03 +03:00 |
| results.feature | server : fix temperature + disable some tests () | 2024-05-20 22:10:03 +10:00 |
| security.feature | server : better security control for public deployments () | 2024-10-08 13:27:04 +02:00 |
| server.feature | server : Add option to return token pieces in /tokenize endpoint () | 2024-09-12 22:30:11 +02:00 |
| slotsave.feature | Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) () | 2024-05-21 14:39:48 +02:00 |
| wrong_usages.feature | server : refactor multitask handling () | 2024-09-02 17:11:51 +02:00 |