# llama.cpp/examples/lookup
Demonstration of Prompt Lookup Decoding
https://github.com/apoorvumang/prompt-lookup-decoding
The key parameters for lookup decoding are `ngram_min`, `ngram_max`, and `n_draft`. The first two set the minimum and maximum size of the n-grams to search for in the prompt when looking for a match. The latter specifies how many subsequent tokens to draft once a match is found.
More info:
https://github.com/ggerganov/llama.cpp/pull/4484
https://github.com/ggerganov/llama.cpp/issues/4226