# llama.cpp/examples/lookup
Demonstration of Prompt Lookup Decoding

https://github.com/apoorvumang/prompt-lookup-decoding
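The example is invoked like the other llama.cpp command-line examples. A possible run is sketched below; the binary path, model path, and prompt are placeholders, and the flags are taken from the common example options (`--draft` sets `n_draft`, `--color` highlights the drafted tokens):

```sh
# illustrative invocation; adjust the binary path and model to your build
./lookup -m models/7B/ggml-model.gguf -p "The quick brown fox" \
    --draft 16 --color -n 128
```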
The key parameters for lookup decoding are `ngram_min`, `ngram_max`, and `n_draft`. The first two determine the size of the ngrams to search for in the prompt for a match. The latter specifies how many subsequent tokens to draft if a match is found.
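To make the matching step concrete, here is a minimal, self-contained sketch of the n-gram lookup that drives this kind of decoding. It is an illustration rather than the code of this example: tokens are plain `int32_t` here (llama.cpp uses `llama_token`), and `draft_from_lookup` is a hypothetical name.

```cpp
// Minimal sketch of prompt lookup decoding's core: find the longest suffix
// n-gram (size in [ngram_min, ngram_max]) that also occurs earlier in the
// context, then draft up to n_draft tokens that followed that occurrence.
#include <cstdint>
#include <cstdio>
#include <vector>

static std::vector<int32_t> draft_from_lookup(
        const std::vector<int32_t> & tokens,
        int ngram_min, int ngram_max, int n_draft) {
    const int n = (int) tokens.size();

    // prefer longer n-grams: a longer match is stronger evidence
    for (int ngram = ngram_max; ngram >= ngram_min; --ngram) {
        if (ngram > n) continue;
        const int32_t * suffix = tokens.data() + n - ngram;

        // scan from the most recent earlier position backwards
        for (int i = n - ngram - 1; i >= 0; --i) {
            bool match = true;
            for (int j = 0; j < ngram; ++j) {
                if (tokens[i + j] != suffix[j]) { match = false; break; }
            }
            if (!match) continue;

            // draft the tokens that followed the earlier occurrence
            std::vector<int32_t> draft;
            for (int k = i + ngram; k < n && (int) draft.size() < n_draft; ++k) {
                draft.push_back(tokens[k]);
            }
            if (!draft.empty()) return draft;
        }
    }
    return {}; // no match: fall back to normal decoding for this step
}

int main() {
    // context ends in {10, 20}, which also occurs at the start,
    // so the tokens after that occurrence ({30, 40}) get drafted
    const std::vector<int32_t> tokens = {10, 20, 30, 40, 50, 10, 20};
    for (int32_t t : draft_from_lookup(tokens, 2, 3, 2)) {
        printf("draft token: %d\n", t);
    }
    return 0;
}
```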
More info:

https://github.com/ggerganov/llama.cpp/pull/4484
https://github.com/ggerganov/llama.cpp/issues/4226