llama.cpp/ggml (last updated 2025-01-11 21:06:49 -05:00)
Name            Last commit message                                                Last commit date
include         GGUF: C++ refactor, backend support, misc fixes (#11030)           2025-01-07 18:01:58 +01:00
src             ggml-cuda : use i and j instead of i0 and i in vec_dot_tq2_0_q8_1  2025-01-11 21:06:49 -05:00
.gitignore      vulkan : cmake integration (#8119)                                 2024-07-13 18:12:39 +02:00
CMakeLists.txt  GGUF: C++ refactor, backend support, misc fixes (#11030)           2025-01-07 18:01:58 +01:00