llama.cpp/ggml/include
| File | Last commit | Date |
| --- | --- | --- |
| ggml-alloc.h | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00 |
| ggml-backend.h | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| ggml-blas.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-cann.h | cann: Add host buffer type for Ascend NPU (#9406) | 2024-09-12 19:46:43 +08:00 |
| ggml-cuda.h | feat: Support Moore Threads GPU (#8383) | 2024-07-28 01:41:25 +02:00 |
| ggml-kompute.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-metal.h | metal : add abort callback (ggml/905) | 2024-08-08 13:19:30 +03:00 |
| ggml-rpc.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-sycl.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-vulkan.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml.h | log : add CONT level for continuing previous log entry (#9610) | 2024-09-24 10:15:35 +03:00 |
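These headers form ggml's public C API: ggml.h declares the core tensor and compute-graph interface, while the per-backend headers (ggml-cuda.h, ggml-metal.h, ggml-sycl.h, and so on) expose the individual backends. As a minimal sketch of what the core ggml.h interface looks like in use, the following builds and computes a tiny graph on the CPU; the memory-pool size, tensor lengths, and thread count are illustrative assumptions, not values taken from this listing.

```c
// Minimal ggml.h usage sketch: compute c = a + b for two small F32 tensors.
#include <stdio.h>
#include "ggml.h"

int main(void) {
    // Reserve a fixed memory pool for tensor data and graph metadata
    // (16 MiB is an arbitrary, illustrative size).
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // Two 1-D float tensors of length 4 (illustrative shape).
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    for (int i = 0; i < 4; ++i) {
        ggml_set_f32_1d(a, i, (float) i);
        ggml_set_f32_1d(b, i, 10.0f);
    }

    // Record c = a + b as a node in the compute graph; nothing is
    // evaluated until the graph is computed.
    struct ggml_tensor * c = ggml_add(ctx, a, b);

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads =*/ 4);

    for (int i = 0; i < 4; ++i) {
        printf("c[%d] = %f\n", i, ggml_get_f32_1d(c, i));
    }

    ggml_free(ctx);
    return 0;
}
```

Running a graph on one of the accelerator backends instead of the CPU goes through the abstractions in ggml-backend.h and ggml-alloc.h rather than this context-based path.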