llama.cpp/ggml (last commit: 2025-01-24 12:38:31 +01:00)
Name            Last commit message                                                      Last commit date
include         rpc : early register backend devices (#11262)                            2025-01-17 10:57:09 +02:00
src             CPU/CUDA: fix (GQA) mul mat back, add CUDA support (#11380)              2025-01-24 12:38:31 +01:00
.gitignore      vulkan : cmake integration (#8119)                                       2024-07-13 18:12:39 +02:00
CMakeLists.txt  cmake : avoid -march=native when reproducible build is wanted (#11366)   2025-01-24 13:21:35 +02:00