llama.cpp/ggml (directory listing; latest commit 2025-02-02 23:48:29 +01:00)
Name            Last commit message                                          Last commit date
cmake           cmake: add ggml find package (#11369)                        2025-01-26 12:07:48 -04:00
include         CUDA: use mma PTX instructions for FlashAttention (#11583)   2025-02-02 19:31:09 +01:00
src             HIP: fix flash_attn_stream_k_fixup warning (#11604)          2025-02-02 23:48:29 +01:00
.gitignore      vulkan : cmake integration (#8119)                           2024-07-13 18:12:39 +02:00
CMakeLists.txt  cmake: add ggml find package (#11369)                        2025-01-26 12:07:48 -04:00
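
A note on the include entry above: PR #11583 moved llama.cpp's CUDA FlashAttention kernels onto raw mma PTX instructions (warp-level tensor-core matrix-multiply-accumulate). The sketch below is not code from that PR; it is a minimal, self-contained illustration of the underlying instruction, in which one warp issues a single mma.sync.aligned.m16n8k16 operation to compute a 16x8 tile of D = A*B with f16 inputs and f32 accumulation. The kernel name mma_tile and the host test data are made up for the example; the per-thread fragment index math follows the PTX ISA's documented register layout for this shape, and the instruction requires an Ampere-class GPU (sm_80) or newer.

// build: nvcc -arch=sm_80 mma_sketch.cu -o mma_sketch
#include <cuda_fp16.h>
#include <cstdio>

// One warp computes D(16x8) = A(16x16) * B(16x8); A, B, D are row-major.
__global__ void mma_tile(const half *A, const half *B, float *D) {
    const int lane  = threadIdx.x;  // 0..31
    const int group = lane >> 2;    // 0..7
    const int tig   = lane & 3;     // thread within group of 4

    // A fragment: 4 registers per thread, each packing two consecutive f16s.
    unsigned a[4];
    a[0] = *(const unsigned *)&A[(group    ) * 16 + tig * 2    ];
    a[1] = *(const unsigned *)&A[(group + 8) * 16 + tig * 2    ];
    a[2] = *(const unsigned *)&A[(group    ) * 16 + tig * 2 + 8];
    a[3] = *(const unsigned *)&A[(group + 8) * 16 + tig * 2 + 8];

    // B fragment: the instruction consumes B column-major, so pack two
    // values from consecutive rows of the row-major B into each register.
    half2 b01 = __halves2half2(B[(tig*2    ) * 8 + group], B[(tig*2 + 1) * 8 + group]);
    half2 b23 = __halves2half2(B[(tig*2 + 8) * 8 + group], B[(tig*2 + 9) * 8 + group]);
    unsigned b[2] = { *(unsigned *)&b01, *(unsigned *)&b23 };

    // Accumulator starts at zero; the instruction computes D = A*B + C,
    // with C and D sharing the same registers here ("+f" constraints).
    float d[4] = {0.0f, 0.0f, 0.0f, 0.0f};
    asm("mma.sync.aligned.m16n8k16.row.col.f32.f16.f16.f32 "
        "{%0,%1,%2,%3}, {%4,%5,%6,%7}, {%8,%9}, {%0,%1,%2,%3};"
        : "+f"(d[0]), "+f"(d[1]), "+f"(d[2]), "+f"(d[3])
        : "r"(a[0]), "r"(a[1]), "r"(a[2]), "r"(a[3]), "r"(b[0]), "r"(b[1]));

    // Scatter this thread's 4 result values back into the 16x8 tile.
    D[(group    ) * 8 + tig * 2    ] = d[0];
    D[(group    ) * 8 + tig * 2 + 1] = d[1];
    D[(group + 8) * 8 + tig * 2    ] = d[2];
    D[(group + 8) * 8 + tig * 2 + 1] = d[3];
}

int main() {
    half hA[16*16], hB[16*8];
    for (int i = 0; i < 16; ++i)                 // A = identity matrix
        for (int k = 0; k < 16; ++k)
            hA[i*16 + k] = __float2half(i == k ? 1.0f : 0.0f);
    for (int k = 0; k < 16*8; ++k)               // B = 0, 1, 2, ..., 127
        hB[k] = __float2half((float)k);

    half *dA, *dB; float *dD;
    cudaMalloc(&dA, sizeof(hA));
    cudaMalloc(&dB, sizeof(hB));
    cudaMalloc(&dD, 16*8*sizeof(float));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);

    mma_tile<<<1, 32>>>(dA, dB, dD);             // a single warp

    float hD[16*8];
    cudaMemcpy(hD, dD, sizeof(hD), cudaMemcpyDeviceToHost);
    printf("D[0]=%g D[127]=%g (expect 0 and 127 since A = I)\n", hD[0], hD[127]);
    return 0;
}

Because A is the identity, D should reproduce B, which makes the sketch easy to verify by eye. The production kernels behind the commit tile many such instructions over shared memory and fuse them with the rest of the attention computation, but the fragment layout and the instruction itself are the same building block.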