llama.cpp/ggml/src

Latest commit: fd08255d0d by Johannes Gäßler
CUDA: non-contiguous (RMS) norm support (#11659)
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-02-04 22:21:42 +01:00
| Name | Latest commit | Date |
| --- | --- | --- |
| ggml-blas | ggml : add support for dynamic loading of backends (#10469) | 2024-11-25 15:13:39 +01:00 |
| ggml-cann | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00 |
| ggml-cpu | ggml-cpu : fix ggml_graph_compute_thread did not terminate on abort. (ggml/1065) | 2025-01-29 11:24:51 +02:00 |
| ggml-cuda | CUDA: non-contiguous (RMS) norm support (#11659) | 2025-02-04 22:21:42 +01:00 |
| ggml-hip | HIP: force max threads per block to be 1024 (#11621) | 2025-02-04 19:18:38 +01:00 |
| ggml-kompute | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00 |
| ggml-metal | CUDA: non-contiguous (RMS) norm support (#11659) | 2025-02-04 22:21:42 +01:00 |
| ggml-musa | CUDA: use mma PTX instructions for FlashAttention (#11583) | 2025-02-02 19:31:09 +01:00 |
| ggml-opencl | common, examples, ggml : fix MSYS2 GCC compiler errors and warnings when building with LLAMA_CURL=ON and GGML_OPENCL=ON (#11013) | 2024-12-31 01:46:06 +01:00 |
| ggml-rpc | rpc : better caching of the base buffer pointer (#11331) | 2025-01-21 15:06:41 +02:00 |
| ggml-sycl | SYCL : SOFTMAX F16 mask support and other fixes (#11261) | 2025-01-28 09:56:58 +00:00 |
| ggml-vulkan | CUDA: non-contiguous (RMS) norm support (#11659) | 2025-02-04 22:21:42 +01:00 |
| CMakeLists.txt | ci: use sccache on windows instead of ccache (#11545) | 2025-01-31 17:12:40 +00:00 |
| ggml-alloc.c | CUDA: backwards pass for misc. ops, add tests (#11257) | 2025-01-16 16:43:38 +01:00 |
| ggml-backend-impl.h | rpc : early register backend devices (#11262) | 2025-01-17 10:57:09 +02:00 |
| ggml-backend-reg.cpp | ggml : allow loading backend with env variable (ggml/1059) | 2025-01-08 13:40:18 +02:00 |
| ggml-backend.cpp | ggml-backend : only offload from host buffers (fix) (#11124) | 2025-01-07 16:11:57 +01:00 |
| ggml-common.h | CUDA: rename macros to avoid conflicts with WinAPI (#10736) | 2024-12-10 18:23:24 +01:00 |
| ggml-impl.h | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| ggml-opt.cpp | ggml-opt: fix data corruption (ggml/1022) | 2024-11-21 09:22:02 +02:00 |
| ggml-quants.c | ggml : refactor online repacking (#10446) | 2024-12-07 14:37:50 +02:00 |
| ggml-quants.h | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| ggml.c | ggml : add option to not print stack on abort (ggml/1081) | 2025-01-29 11:24:53 +02:00 |
| gguf.cpp | cmake : add sanitizer flags for llama.cpp (#11279) | 2025-01-18 16:18:15 +02:00 |