mirror of
https://github.com/ggerganov/llama.cpp.git
synced 2024-12-27 06:39:25 +01:00
644fd71b44
* sampling : refactor + optimize penalties sampler ggml-ci
* common : apply ignore_eos as logit bias ggml-ci
* batched : remove penalties sampler
* params : allow penalty_last_n == -1 to be equal to context size ggml-ci
* common : by default, move the penalties at the end of the sampling chain ggml-ci
* common : ignore all EOG tokens

Co-authored-by: Diego Devesa <slarengh@gmail.com>

* common : move back the penalties at the front of the sampling chain ggml-ci
* readme : restore hint about --ignore-eos flag [no ci]
* llama : minor ggml-ci
* webui : update

---------

Co-authored-by: Diego Devesa <slarengh@gmail.com>
colorthemes.css
completion.js
favicon.ico
index-new.html
index.html
index.js
json-schema-to-grammar.mjs
loading.html
prompt-formats.js
style.css
system-prompts.js
theme-beeninorder.css
theme-ketivah.css
theme-mangotango.css
theme-playground.css
theme-polarnight.css
theme-snowstorm.css