llama.cpp/ggml
Latest commit 3420909dff by Diego Devesa (2024-12-01 16:12:41 +01:00):

ggml : automatic selection of best CPU backend (#10606)

* ggml : automatic selection of best CPU backend
* amx : minor opt
* add GGML_AVX_VNNI to enable avx-vnni, fix checks
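
The headline change is runtime dispatch: rather than fixing one x86 instruction-set level at compile time, the build can ship several CPU backend variants and pick the most capable one the host supports. Below is a minimal sketch of that selection idea, assuming x86 and the GCC/Clang __builtin_cpu_supports builtin; the variant names are hypothetical, not ggml's actual identifiers.

    #include <stdio.h>

    /* Hypothetical variant names for illustration only; the real selection
     * logic lives in ggml's backend registry and scores the prebuilt CPU
     * backend variants against the detected host features. */
    static const char * pick_cpu_backend_variant(void) {
        __builtin_cpu_init(); /* initialize CPU feature probing (GCC/Clang, x86) */
        if (__builtin_cpu_supports("avx512f")) return "ggml-cpu-avx512";
        if (__builtin_cpu_supports("avx2"))    return "ggml-cpu-avx2";
        if (__builtin_cpu_supports("avx"))     return "ggml-cpu-avx";
        return "ggml-cpu-baseline"; /* portable fallback, no AVX required */
    }

    int main(void) {
        printf("selected variant: %s\n", pick_cpu_backend_variant());
        return 0;
    }

Ordering matters: features are probed from most to least capable, so the first match is the best variant the machine can actually execute.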
include          ggml : move AMX to the CPU backend (#10570)               2024-11-29 21:54:58 +01:00
src              ggml : automatic selection of best CPU backend (#10606)   2024-12-01 16:12:41 +01:00
.gitignore       vulkan : cmake integration (#8119)                        2024-07-13 18:12:39 +02:00
CMakeLists.txt   ggml : automatic selection of best CPU backend (#10606)   2024-12-01 16:12:41 +01:00
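
One of the commit bullets above adds a GGML_AVX_VNNI option to CMakeLists.txt. Assuming a standard out-of-tree CMake build of this repository, enabling it would look roughly like this (the option name comes straight from the commit message; its default value is a build detail not shown here):

    cmake -B build -DGGML_AVX_VNNI=ON
    cmake --build build --config Release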