Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-16 15:18:26 +01:00
llama.cpp/prompts/chat-with-qwen.txt
Commit 37c746d687 by Shijie
llama : add Qwen support ()
* enable Qwen in llama.cpp

* llama : do not GPU split bias tensors

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-12-01 20:16:31 +02:00

1 line · 28 B · Plaintext