llama.cpp/ggml
Radoslav Gerganov · 1244cdcf14
ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL (#11211)
The build fails when using HIP together with GGML_BACKEND_DL:
```
/usr/bin/ld: ../ggml/src/libggml.so: undefined reference to `ggml_backend_cuda_reg'
collect2: error: ld returned 1 exit status
```
This patch fixes the issue by not defining GGML_USE_CUDA when GGML_BACKEND_DL is enabled (the sketch below illustrates why the two conflict).
2025-01-13 13:31:41 +02:00
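
For context, a minimal sketch of why the two options conflict. The registry details here are simplified placeholders, not ggml's actual internals; only `ggml_backend_cuda_reg` is taken from the error message above. When `GGML_USE_CUDA` is defined, the core library takes a link-time reference to the CUDA backend's registration function, but a `GGML_BACKEND_DL` build ships that backend as a separately loaded module, so the symbol never ends up in `libggml.so`:

```
// backend_reg_sketch.cpp -- illustrative only, not the actual ggml source.
// Build without -DGGML_USE_CUDA and it links cleanly; build with it but
// without linking a CUDA backend and the linker reports the same
// "undefined reference to `ggml_backend_cuda_reg'" as above.

#include <cstdio>
#include <vector>

// Hypothetical stand-in for ggml's registry handle type.
typedef void * ggml_backend_reg_t;

static std::vector<ggml_backend_reg_t> g_backends;

static void register_backend(ggml_backend_reg_t reg) {
    g_backends.push_back(reg);
}

#ifdef GGML_USE_CUDA
// Provided by the CUDA/HIP backend. Resolvable only when that backend is
// linked into the same binary -- which GGML_BACKEND_DL deliberately avoids.
extern "C" ggml_backend_reg_t ggml_backend_cuda_reg(void);
#endif

static void register_static_backends(void) {
#ifdef GGML_USE_CUDA
    // This call is the source of the link-time reference. Defining
    // GGML_USE_CUDA in a GGML_BACKEND_DL build therefore cannot work:
    // the backend is a separately loaded module, not a linked-in symbol.
    register_backend(ggml_backend_cuda_reg());
#endif
}

int main(void) {
    register_static_backends();
    std::printf("registered %zu static backend(s)\n", g_backends.size());
    return 0;
}
```

Not setting the `GGML_USE_*` defines in DL builds is the smaller change: a dynamically loaded backend presumably registers itself when its module is loaded, so the static link-time reference is simply unnecessary there.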
| Name | Last commit | Date |
|------|-------------|------|
| include | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00 |
| src | ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL (#11211) | 2025-01-13 13:31:41 +02:00 |
| .gitignore | vulkan : cmake integration (#8119) | 2024-07-13 18:12:39 +02:00 |
| CMakeLists.txt | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |