# llama.cpp/example/run

This example demonstrates minimal usage of llama.cpp for running models.

```bash
./llama-run Meta-Llama-3.1-8B-Instruct.gguf
...
```