# llama.cpp/example/simple-chat
The purpose of this example is to demonstrate a minimal usage of llama.cpp to create a simple chat program using the chat template from the GGUF file.
```bash
./llama-simple-chat -m Meta-Llama-3.1-8B-Instruct.gguf -c 2048
...
```
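
The key step in the example is `llama_chat_apply_template()`, which renders the conversation with the chat template stored in the GGUF metadata before the text is tokenized and decoded. The sketch below shows only that step, assuming the C API from around the time this example was added; the model path is a placeholder and function names or signatures may differ in newer llama.cpp releases. See `simple-chat.cpp` for the authoritative, full chat loop.

```cpp
// Hypothetical sketch (not the example's actual source): load a model and
// format one user message through the GGUF's built-in chat template.
// Assumes the late-2024 llama.cpp C API; newer releases may differ.
#include "llama.h"

#include <cstdio>
#include <vector>

int main() {
    llama_backend_init();

    llama_model_params mparams = llama_model_default_params();
    // placeholder model path for illustration
    llama_model * model = llama_load_model_from_file("Meta-Llama-3.1-8B-Instruct.gguf", mparams);
    if (model == nullptr) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // a single user turn; fields follow the llama_chat_message layout (role, content)
    std::vector<llama_chat_message> messages = {
        { "user", "Hello, who are you?" },
    };

    // tmpl == nullptr selects the chat template embedded in the GGUF file;
    // the return value is the full formatted length, so retry with a larger buffer if needed
    std::vector<char> formatted(1024);
    int32_t len = llama_chat_apply_template(model, nullptr, messages.data(), messages.size(),
                                            /*add_ass=*/true, formatted.data(), formatted.size());
    if (len > (int32_t) formatted.size()) {
        formatted.resize(len);
        len = llama_chat_apply_template(model, nullptr, messages.data(), messages.size(),
                                        true, formatted.data(), formatted.size());
    }
    if (len < 0) {
        fprintf(stderr, "failed to apply chat template\n");
        return 1;
    }

    // this formatted prompt is what the example then tokenizes and feeds to llama_decode()
    printf("%.*s\n", len, formatted.data());

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```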