llama.cpp/ggml/src (latest commit: 2024-08-22 22:09:47 +08:00)
Name                  Last commit date            Last commit message
ggml-cann             2024-08-13 21:13:15 +02:00  ggml : move rope type enum to ggml.h (#8949)
ggml-cuda             2024-08-13 21:13:15 +02:00  ggml : move rope type enum to ggml.h (#8949)
ggml-sycl             2024-08-22 12:50:10 +08:00  [SYCL] Add oneDNN primitive support (#9091)
kompute@4565194ed7    2024-06-26 18:33:02 +03:00  llama : reorganize source code + improve CMake (#8006)
kompute-shaders       2024-08-13 21:13:15 +02:00  ggml : move rope type enum to ggml.h (#8949)
llamafile             2024-07-10 15:23:29 +03:00  ggml : move sgemm sources to llamafile subfolder (#8394)
vulkan-shaders        2024-08-20 21:00:00 +02:00  llava: Add ACC OP for GPU acceleration to the Vulkan backend in the LLAVA CLIP model. (#8984)
CMakeLists.txt        2024-08-22 22:09:47 +08:00  [SYCL] Add a space to supress a cmake warning (#9133)
ggml-aarch64.c        2024-08-08 13:19:31 +03:00  ggml : ignore more msvc warnings (ggml/906)
ggml-aarch64.h        2024-07-12 10:46:02 +03:00  ggml : minor naming changes (#8433)
ggml-alloc.c          2024-07-27 04:41:55 +02:00  ggml : reduce hash table reset cost (#8698)
ggml-backend-impl.h   2024-06-26 18:33:02 +03:00  llama : reorganize source code + improve CMake (#8006)
ggml-backend.c        2024-08-16 04:22:55 +02:00  ggml : dynamic ggml_sched_max_splits based on graph_size (#9047)
ggml-blas.cpp         2024-07-27 04:41:55 +02:00  ggml : reduce hash table reset cost (#8698)
ggml-cann.cpp         2024-08-06 12:42:42 +08:00  [CANN]: Fix ggml_backend_cann_buffer_get_tensor (#8871)
ggml-common.h         2024-07-28 01:41:25 +02:00  feat: Support Moore Threads GPU (#8383)
ggml-cuda.cu          2024-08-07 13:29:02 +02:00  ggml-backend : fix async copy from CPU (#8897)
ggml-impl.h           2024-08-03 18:34:41 +02:00  ggml : reading the runtime sve config of the cpu (#8709)
ggml-kompute.cpp      2024-07-27 04:41:55 +02:00  ggml : reduce hash table reset cost (#8698)
ggml-metal.m          2024-08-13 21:13:15 +02:00  ggml : move rope type enum to ggml.h (#8949)
ggml-metal.metal      2024-07-19 17:17:27 +02:00  ggml : fix quant dot product with odd number of blocks (#8549)
ggml-quants.c         2024-08-03 18:34:41 +02:00  ggml : reading the runtime sve config of the cpu (#8709)
ggml-quants.h         2024-08-03 18:34:41 +02:00  ggml : reading the runtime sve config of the cpu (#8709)
ggml-rpc.cpp          2024-08-19 10:11:45 +03:00  rpc : print error message when failed to connect endpoint (#9042)
ggml-sycl.cpp         2024-08-22 12:50:10 +08:00  [SYCL] Add oneDNN primitive support (#9091)
ggml-vulkan.cpp       2024-08-20 21:00:00 +02:00  llava: Add ACC OP for GPU acceleration to the Vulkan backend in the LLAVA CLIP model. (#8984)
ggml.c                2024-08-21 17:58:11 -04:00  llama : simplify Mamba with advanced batch splits (#8526)