llama.cpp/ggml (last commit: 2025-01-24 21:02:43 +01:00)
Name            Latest commit message                                                    Date
include         rpc : early register backend devices (#11262)                            2025-01-17 10:57:09 +02:00
src             CUDA: fix FP16 cuBLAS GEMM (#11396)                                      2025-01-24 21:02:43 +01:00
.gitignore      vulkan : cmake integration (#8119)                                       2024-07-13 18:12:39 +02:00
CMakeLists.txt  cmake : avoid -march=native when reproducible build is wanted (#11366)   2025-01-24 13:21:35 +02:00