llama.cpp/prompts
Latest commit: 37c746d687 by Shijie, 2023-12-01 20:16:31 +02:00
llama : add Qwen support (#4281)
  * enable qwen to llama.cpp
  * llama : do not GPU split bias tensors
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Name                        Last commit message                                            Last commit date
alpaca.txt
assistant.txt               speculative : add tree-based sampling example (#3624)          2023-10-18 16:21:57 +03:00
chat-with-baichuan.txt
chat-with-bob.txt
chat-with-qwen.txt          llama : add Qwen support (#4281)                                2023-12-01 20:16:31 +02:00
chat-with-vicuna-v0.txt
chat-with-vicuna-v1.txt
chat.txt
dan-modified.txt
dan.txt
LLM-questions.txt           parallel : add option to load external prompt file (#3416)     2023-10-06 16:16:38 +03:00
mnemonics.txt               prompts : add mnemonics.txt                                    2023-10-12 09:35:30 +03:00
parallel-questions.txt      prompts : fix editorconfig checks after #3416                   2023-10-06 16:36:32 +03:00
reason-act.txt
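
These prompt files are plain text and are typically fed to the llama.cpp example programs with the -f/--file flag. As a minimal illustration (the model path is a placeholder, and newer builds name the binary llama-cli instead of main):

    ./main -m ./models/7B/ggml-model-q4_0.gguf --color -i -r "User:" -f prompts/chat-with-bob.txt

This starts an interactive session seeded with chat-with-bob.txt, using "User:" as the reverse prompt so generation pauses for user input.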