llama.cpp/tests
Latest commit 644fd71b44 by Georgi Gerganov, 2024-12-16 12:31:14 +02:00

sampling : refactor + optimize penalties sampler (#10803)

* sampling : refactor + optimize penalties sampler
* common : apply ignore_eos as a logit bias
* batched : remove the penalties sampler
* params : allow penalty_last_n == -1 to mean the full context size
* common : by default, move the penalties to the end of the sampling chain
* common : ignore all EOG tokens
* common : move the penalties back to the front of the sampling chain
* readme : restore the hint about the --ignore-eos flag [no ci]
* llama : minor
* webui : update

Co-authored-by: Diego Devesa <slarengh@gmail.com>
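The chain-related changes in this commit can be pictured with one sketch: --ignore-eos becomes a plain logit-bias entry (every end-of-generation token biased to -inf), penalty_last_n == -1 is resolved to the context size, and the penalties sampler sits at the front of the chain before the final token picker. This is a minimal illustration, not the actual common sampling code: the make_chain helper is hypothetical, and the four-argument llama_sampler_init_penalties signature is assumed from the refactor, so check llama.h in your checkout for the exact API.

```cpp
// Sketch only: ignore_eos expressed as a logit bias, penalty_last_n == -1
// resolved to the context size, penalties placed at the front of the chain.
// Assumes the post-refactor llama_sampler_init_penalties(last_n, repeat,
// freq, present) signature; verify against llama.h.
#include "llama.h"

#include <cmath>
#include <vector>

static llama_sampler * make_chain(const llama_model * model, const llama_context * ctx,
                                  int32_t penalty_last_n, float repeat, float freq, float present,
                                  bool ignore_eos, uint32_t seed) {
    llama_sampler * chain = llama_sampler_chain_init(llama_sampler_chain_default_params());

    if (ignore_eos) {
        // ignore_eos as a logit bias: push all end-of-generation tokens to -inf
        // instead of special-casing EOS inside the samplers
        const int32_t n_vocab = llama_n_vocab(model);
        std::vector<llama_logit_bias> eog_bias;
        for (llama_token tok = 0; tok < n_vocab; ++tok) {
            if (llama_token_is_eog(model, tok)) {
                eog_bias.push_back({ tok, -INFINITY });
            }
        }
        llama_sampler_chain_add(chain,
            llama_sampler_init_logit_bias(n_vocab, (int32_t) eog_bias.size(), eog_bias.data()));
    }

    // penalty_last_n == -1 means "penalize over the whole context"
    if (penalty_last_n < 0) {
        penalty_last_n = (int32_t) llama_n_ctx(ctx);
    }

    // penalties go at the front of the chain, before the final token picker
    llama_sampler_chain_add(chain, llama_sampler_init_penalties(penalty_last_n, repeat, freq, present));
    llama_sampler_chain_add(chain, llama_sampler_init_dist(seed));

    return chain;
}
```

The two "move the penalties" entries in the log reflect that the placement was first tried at the end of the chain and then restored to the front, which is the ordering shown above.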
File                               Last commit                                                             Last commit date
.gitignore                         tests : gitignore ggml-common.h                                         2024-03-09 14:17:11 +02:00
CMakeLists.txt                     remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797)                        2024-12-12 19:02:49 +01:00
get-model.cpp                      ci : add model tests + script wrapper (#4586)                           2024-01-26 14:18:00 +02:00
get-model.h                        ci : add model tests + script wrapper (#4586)                           2024-01-26 14:18:00 +02:00
run-json-schema-to-grammar.mjs     server : revamp chat UI with vuejs and daisyui (#10175)                 2024-11-07 17:31:10 -04:00
test-arg-parser.cpp                speculative : refactor and add a simpler example (#10362)               2024-11-25 09:58:41 +02:00
test-autorelease.cpp               ggml : add numa options (#5377)                                         2024-02-16 11:31:07 +02:00
test-backend-ops.cpp               llama : add Qwen2VL support + multimodal RoPE (#10361)                  2024-12-14 14:43:46 +02:00
test-barrier.cpp                   ggml : move CPU backend to a separate file (#10144)                     2024-11-03 19:34:08 +01:00
test-c.c                           Nomic Vulkan backend (#4456)                                            2024-01-29 15:50:50 -05:00
test-chat-template.cpp             llama : add Deepseek MoE v1 & GigaChat models (#10827)                  2024-12-15 19:02:46 +02:00
test-double-float.cpp              ggml : minor naming changes (#8433)                                     2024-07-12 10:46:02 +03:00
test-grammar-integration.cpp       llama : refactor sampling v2 (#9294)                                    2024-09-07 15:16:19 +03:00
test-grammar-parser.cpp            llama : refactor sampling v2 (#9294)                                    2024-09-07 15:16:19 +03:00
test-json-schema-to-grammar.cpp    grammar : fix JSON Schema for string regex with top-level alt. (#9903)  2024-10-16 19:03:24 +03:00
test-llama-grammar.cpp             llama : refactor sampling v2 (#9294)                                    2024-09-07 15:16:19 +03:00
test-log.cpp                       common : use common_ prefix for common library functions (#9805)        2024-10-10 22:57:42 +02:00
test-lora-conversion-inference.sh  Fix HF repo commit to clone lora test models (#10649)                   2024-12-04 10:45:48 +01:00
test-model-load-cancel.cpp         ggml : add numa options (#5377)                                         2024-02-16 11:31:07 +02:00
test-opt.cpp                       ggml : inttypes.h -> cinttypes (#0)                                     2024-11-17 08:30:29 +02:00
test-quantize-fns.cpp              tests : fix compile warning                                             2024-11-25 15:17:32 +02:00
test-quantize-perf.cpp             ggml : inttypes.h -> cinttypes (#0)                                     2024-11-17 08:30:29 +02:00
test-rope.cpp                      llama : add Qwen2VL support + multimodal RoPE (#10361)                  2024-12-14 14:43:46 +02:00
test-sampling.cpp                  sampling : refactor + optimize penalties sampler (#10803)               2024-12-16 12:31:14 +02:00
test-tokenizer-0.cpp               common : use common_ prefix for common library functions (#9805)        2024-10-10 22:57:42 +02:00
test-tokenizer-0.py                py : logging and flake8 suppression refactoring (#7081)                 2024-05-05 08:07:48 +03:00
test-tokenizer-0.sh                tests : fix test-tokenizer-0.sh                                         2024-05-28 15:04:09 +03:00
test-tokenizer-1-bpe.cpp           common : use common_ prefix for common library functions (#9805)        2024-10-10 22:57:42 +02:00
test-tokenizer-1-spm.cpp           common : use common_ prefix for common library functions (#9805)        2024-10-10 22:57:42 +02:00
test-tokenizer-random.py           llama : fix pre-tokenization of non-special added tokens (#8228)        2024-07-13 23:35:10 -04:00