Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-27 06:39:25 +01:00)

Commit 0e89203b51
* sampling : one sequence per sampling context (ggml-ci)
* speculative : add tree-based sampling support (ggml-ci)
* speculative : reuse the n_parallel CLI param
* speculative : refactor sampling
* examples : fix build after sampling refactoring (ggml-ci)
* batched : fix n_seq_id
* sampling : fix malloc (ggml-ci)
* swift : fix build (ggml-ci)
* swift : try to fix build (ggml-ci)
* prompts : add assistant.txt
* common : add llama_batch_add() and llama_batch_clear() helpers
* speculative : minor refactor (ggml-ci)
* minor : comments + rename (ggml-ci)
* speculative : fix off-by-one for n_drafted
* speculative : fix the n_drafted fix + p constants
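The commit message above mentions `llama_batch_add()` and `llama_batch_clear()` helpers added to `common`. As a rough, self-contained sketch of the pattern such helpers follow (simplified stand-in types, not the actual llama.cpp structs or signatures):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified stand-ins for llama.cpp's token / position / sequence-id types.
using token_t  = int32_t;
using pos_t    = int32_t;
using seq_id_t = int32_t;

// A minimal batch: each token carries a position, the sequence ids it belongs
// to (possibly several, when sequences share a common prefix), and a flag
// saying whether logits should be computed for it.
struct batch {
    std::vector<token_t>               tokens;
    std::vector<pos_t>                 pos;
    std::vector<std::vector<seq_id_t>> seq_ids;
    std::vector<bool>                  logits;
};

// Append one token to the batch (sketch of what a batch_add helper does).
void batch_add(batch & b, token_t id, pos_t pos,
               const std::vector<seq_id_t> & seq_ids, bool logits) {
    b.tokens.push_back(id);
    b.pos.push_back(pos);
    b.seq_ids.push_back(seq_ids);
    b.logits.push_back(logits);
}

// Reset the batch so it can be refilled (sketch of a batch_clear helper).
void batch_clear(batch & b) {
    b.tokens.clear();
    b.pos.clear();
    b.seq_ids.clear();
    b.logits.clear();
}
```

In llama.cpp itself the real helpers operate on the library's C `llama_batch` struct rather than `std::vector`s; the sketch only illustrates the add/clear convenience pattern the commit refers to.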
Files in prompts/:
- alpaca.txt
- assistant.txt
- chat-with-baichuan.txt
- chat-with-bob.txt
- chat-with-vicuna-v0.txt
- chat-with-vicuna-v1.txt
- chat.txt
- dan-modified.txt
- dan.txt
- LLM-questions.txt
- mnemonics.txt
- parallel-questions.txt
- reason-act.txt