llama.cpp/ggml
Name            Last commit message                                       Last commit date
include         ggml : move AMX to the CPU backend (#10570)               2024-11-29 21:54:58 +01:00
src             Avoid using __fp16 on ARM with old nvcc (#10616)          2024-12-04 01:41:37 +01:00
.gitignore      vulkan : cmake integration (#8119)                        2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : automatic selection of best CPU backend (#10606)   2024-12-01 16:12:41 +01:00