llama.cpp/ggml/src
Latest commit 39509fb082 by Andreas Kieslinger:
cuda : CUDA Graph Compute Function Refactor (precursor for performance improvements) (#11042)
* Refactor: Moves the CUDA graph executable update step to a separate function.

* Refactor: Moves the CUDA graph update check to a separate function.

* Refactor: Moves CUDA graph maintenance (updating the graph or adjusting copy parameters) to a separate function for improved readability.

* Fix: Adds missing reference to the maintain_cuda_graph() definition.

* Refactor: Improves structure and abstractions by moving CUDA graph evaluation and capture to its own function.

* Refactor: Moves node graph checks and copy ops into a separate function for improved readability.

* Refactor: Removes code permanently excluded from compilation to increase readability.

* Style: Adds missing newline

* Style: Consolidates several neighboring '#ifdef USE_CUDA_GRAPH' blocks into a single one

* Refactor: Makes 'cuda_graph_update_required' a local variable

* Style: Removes double blank lines between functions

---------

Co-authored-by: slaren <slarengh@gmail.com>
2025-01-13 16:45:53 +01:00
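
The commit bullets above describe the shape the CUDA graph compute path takes after the refactor: a separate update check, a maintenance step (refresh the executable or just patch copy parameters), and a capture/evaluate function. What follows is a minimal, hypothetical sketch of that control flow, not the actual ggml-cuda code: only maintain_cuda_graph() and cuda_graph_update_required are named in the commit, and the other helper names, the cuda_graph_state struct, and the CUDA 12 runtime-API signatures are assumptions made for illustration.

    // Hypothetical sketch, not the real ggml-cuda implementation. Only
    // maintain_cuda_graph() and cuda_graph_update_required are named in the
    // commit above; all other names and the CUDA 12 API signatures are
    // assumptions for illustration.
    #include <cuda_runtime.h>

    struct cuda_graph_state {
        cudaGraph_t     graph    = nullptr; // last captured graph
        cudaGraphExec_t instance = nullptr; // instantiated executable
    };

    // Update check: does the captured graph still match the work we are
    // about to submit? A real check would compare node counts, kernel
    // parameters, etc.; this placeholder always requests an update.
    static bool is_cuda_graph_update_required() {
        return true;
    }

    // Executable update step: try a cheap in-place update of the
    // executable; fall back to re-instantiating it if that fails.
    static void update_cuda_graph_executable(cuda_graph_state & g) {
        cudaGraphExecUpdateResultInfo info{};
        if (cudaGraphExecUpdate(g.instance, g.graph, &info) != cudaSuccess) {
            (void) cudaGetLastError(); // clear the sticky error state
            cudaGraphExecDestroy(g.instance);
            cudaGraphInstantiate(&g.instance, g.graph, 0);
        }
    }

    // Maintenance: either refresh the whole executable or merely adjust
    // copy parameters (e.g. via cudaGraphExecMemcpyNodeSetParams).
    static void maintain_cuda_graph(cuda_graph_state & g, bool update_required) {
        if (update_required) {
            update_cuda_graph_executable(g);
        }
        // else: patch only the memcpy node parameters that changed
    }

    // Evaluation and capture: record the kernel launches once, then replay
    // the entire graph with a single launch call per compute step.
    static void evaluate_and_capture_cuda_graph(cuda_graph_state & g,
                                                cudaStream_t stream) {
        const bool cuda_graph_update_required = is_cuda_graph_update_required();
        if (cuda_graph_update_required) {
            cudaStreamBeginCapture(stream, cudaStreamCaptureModeRelaxed);
            // ... enqueue the compute graph's kernels on `stream` here ...
            if (g.graph != nullptr) {
                cudaGraphDestroy(g.graph); // drop the stale capture
            }
            cudaStreamEndCapture(stream, &g.graph);
            if (g.instance == nullptr) {
                cudaGraphInstantiate(&g.instance, g.graph, 0);
            } else {
                maintain_cuda_graph(g, /*update_required=*/true);
            }
        }
        cudaGraphLaunch(g.instance, stream);
    }

Once the capture is warm and no update is required, each compute step collapses to a single cudaGraphLaunch(), replacing per-kernel launch overhead with one graph replay; that is the headroom the commit title's "precursor for performance improvements" points at.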
Name                 | Last commit | Date
ggml-blas            | ggml : add support for dynamic loading of backends (#10469) | 2024-11-25 15:13:39 +01:00
ggml-cann            | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00
ggml-cpu             | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00
ggml-cuda            | cuda : CUDA Graph Compute Function Refactor (precursor for performance improvements) (#11042) | 2025-01-13 16:45:53 +01:00
ggml-hip             | ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL (#11211) | 2025-01-13 13:31:41 +02:00
ggml-kompute         | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00
ggml-metal           | ggml : do not install metal source when embed library (ggml/1054) | 2025-01-04 16:09:53 +02:00
ggml-musa            | mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516) | 2024-11-26 17:00:41 +01:00
ggml-opencl          | common, examples, ggml : fix MSYS2 GCC compiler errors and warnings when building with LLAMA_CURL=ON and GGML_OPENCL=ON (#11013) | 2024-12-31 01:46:06 +01:00
ggml-rpc             | rpc : code cleanup (#11107) | 2025-01-07 08:37:02 +02:00
ggml-sycl            | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00
ggml-vulkan          | Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (#11161) | 2025-01-10 06:39:33 +01:00
CMakeLists.txt       | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00
ggml-alloc.c         | ggml : remove return from ggml_gallocr_allocate_node (ggml/1048) | 2024-12-17 18:35:49 +02:00
ggml-backend-impl.h  | ggml : automatic selection of best CPU backend (#10606) | 2024-12-01 16:12:41 +01:00
ggml-backend-reg.cpp | ggml : allow loading backend with env variable (ggml/1059) | 2025-01-08 13:40:18 +02:00
ggml-backend.cpp     | ggml-backend : only offload from host buffers (fix) (#11124) | 2025-01-07 16:11:57 +01:00
ggml-common.h        | CUDA: rename macros to avoid conflicts with WinAPI (#10736) | 2024-12-10 18:23:24 +01:00
ggml-impl.h          | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00
ggml-opt.cpp         | ggml-opt: fix data corruption (ggml/1022) | 2024-11-21 09:22:02 +02:00
ggml-quants.c        | ggml : refactor online repacking (#10446) | 2024-12-07 14:37:50 +02:00
ggml-quants.h        | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00
ggml-threading.cpp   | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00
ggml-threading.h     | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00
ggml.c               | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00
gguf.cpp             | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00