llama.cpp/ggml/src/ggml-cuda
Andreas Kieslinger 39509fb082
cuda : CUDA Graph Compute Function Refactor (precursor for performance improvements) (#11042)
* Refactor: Moves the CUDA graph executable update step to a separate function.

* Refactor: Moves the CUDA graph update check to a separate function.

* Refactor: Moves CUDA graph maintenance (updating or adjusting copy parameters) to a separate function for improved readability.

* Fix: Adds a missing reference to the maintain_cuda_graph() definition.

* Refactor: Improves structure and abstraction by moving CUDA graph evaluation and capture to its own function (see the sketch after this commit header).

* Refactor: Moves node graph checks and copy ops into an individual function for improved readability.

* Refactor: Removes code permanently excluded from compilation to increase readability.

* Style: Adds missing newline

* Style: Consolidates several neighboring '#ifdef USE_CUDA_GRAPH' into a single one

* Refactor: Makes 'cuda_graph_update_required' a local variable

* Style: Removes double blank lines between functions

---------

Co-authored-by: slaren <slarengh@gmail.com>
2025-01-13 16:45:53 +01:00
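The refactor above splits the monolithic CUDA graph compute path in ggml-cuda.cu into small helpers: a check for whether the captured graph is stale, a maintenance step that re-captures or patches it, an executable update step, and the launch itself. Below is a minimal standalone sketch of that control flow. Only maintain_cuda_graph() is named by the commit message; the other helper names, the cuda_graph_state struct, and the toy scale_kernel workload are illustrative assumptions, not llama.cpp's actual code. Uses CUDA 12 API signatures.

```cpp
// Sketch of the factored-out CUDA graph compute flow. Build: nvcc graph_sketch.cu
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale_kernel(float * x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

struct cuda_graph_state {
    cudaGraph_t     graph    = nullptr;
    cudaGraphExec_t instance = nullptr;
    int             last_n   = -1;  // stand-in for the per-node properties recorded at capture time
};

// "Moves the CUDA graph update check to a separate function."
static bool is_cuda_graph_update_required(const cuda_graph_state & st, int n) {
    return st.instance == nullptr || st.last_n != n;
}

// "Moves node graph checks and copy ops into an individual function" -- here
// the captured work is a single kernel launch on the capture stream.
static void evaluate_graph(cudaStream_t stream, float * x, int n) {
    scale_kernel<<<(n + 255) / 256, 256, 0, stream>>>(x, 2.0f, n);
}

// "Moves the CUDA graph executable update step to a separate function."
// Try a cheap in-place executable update first; re-instantiate on failure.
static void update_cuda_graph_executable(cuda_graph_state & st) {
    cudaGraphExecUpdateResultInfo info;
    if (st.instance == nullptr ||
        cudaGraphExecUpdate(st.instance, st.graph, &info) != cudaSuccess) {
        cudaGetLastError();  // clear the sticky error from a failed update
        if (st.instance) cudaGraphExecDestroy(st.instance);
        cudaGraphInstantiate(&st.instance, st.graph, 0);
    }
}

// "Moves CUDA graph maintenance ... to a separate function."
static void maintain_cuda_graph(cuda_graph_state & st, cudaStream_t stream,
                                float * x, int n) {
    if (st.graph) cudaGraphDestroy(st.graph);
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeRelaxed);
    evaluate_graph(stream, x, n);
    cudaStreamEndCapture(stream, &st.graph);
    update_cuda_graph_executable(st);
    st.last_n = n;
}

// Top-level shape of the refactored compute function.
static void graph_compute(cuda_graph_state & st, cudaStream_t stream,
                          float * x, int n) {
    if (is_cuda_graph_update_required(st, n)) {
        maintain_cuda_graph(st, stream, x, n);
    }
    cudaGraphLaunch(st.instance, stream);
}

int main() {
    const int n = 1024;
    float * x;
    cudaMalloc(&x, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    cuda_graph_state st;
    for (int iter = 0; iter < 3; ++iter) {
        graph_compute(st, stream, x, n);  // captures once, replays afterwards
    }
    cudaStreamSynchronize(stream);
    printf("done\n");

    cudaGraphExecDestroy(st.instance);
    cudaGraphDestroy(st.graph);
    cudaStreamDestroy(stream);
    cudaFree(x);
    return 0;
}
```

The key design point the refactor isolates is the fallback order in update_cuda_graph_executable(): patching the existing executable via cudaGraphExecUpdate() is much cheaper than re-instantiating, which is why the commit title frames the split as a precursor for performance improvements.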
template-instances CUDA: MMQ code deduplication + iquant support (#8495) 2024-07-20 22:25:26 +02:00
vendors CUDA: add BF16 support (#11093) 2025-01-06 02:33:52 +01:00
acc.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
acc.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
arange.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
arange.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
argmax.cu cuda : optimize argmax (#10441) 2024-11-21 18:18:50 +01:00
argmax.cuh ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) 2024-10-03 21:17:26 +03:00
argsort.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
argsort.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
binbcast.cu ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
binbcast.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
clamp.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
clamp.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
CMakeLists.txt cmake : enable warnings in llama (#10474) 2024-11-26 14:18:08 +02:00
common.cuh CUDA: rename macros to avoid conflicts with WinAPI (#10736) 2024-12-10 18:23:24 +01:00
concat.cu fix: add missing msg in static_assert (#11143) 2025-01-08 20:03:28 +00:00
concat.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
conv-transpose-1d.cu feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) 2024-07-08 12:23:00 +03:00
conv-transpose-1d.cuh feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) 2024-07-08 12:23:00 +03:00
convert.cu CUDA: add BF16 support (#11093) 2025-01-06 02:33:52 +01:00
convert.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
count-equal.cu ggml: fix zero division in ‘dne’ calculation in CUDA COUNT_EQUAL operator when ‘ne’ is small (#10213) 2024-11-09 08:35:46 +01:00
count-equal.cuh ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) 2024-10-03 21:17:26 +03:00
cpy.cu cuda: add q8_0->f32 cpy operation (#9571) 2024-09-24 02:14:24 +02:00
cpy.cuh increase cuda_cpy block size (ggml/996) 2024-10-26 10:33:56 +03:00
cross-entropy-loss.cu ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
cross-entropy-loss.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
dequantize.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
diagmask.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
diagmask.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
fattn-common.cuh ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
fattn-tile-f16.cu ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
fattn-tile-f16.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
fattn-tile-f32.cu ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
fattn-tile-f32.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
fattn-vec-f16.cuh CUDA: remove unnecessary warp reduce in FA (ggml/1032) 2024-12-03 20:04:49 +02:00
fattn-vec-f32.cuh CUDA: remove unnecessary warp reduce in FA (ggml/1032) 2024-12-03 20:04:49 +02:00
fattn-wmma-f16.cuh ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
fattn.cu CUDA: rename macros to avoid conflicts with WinAPI (#10736) 2024-12-10 18:23:24 +01:00
fattn.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
getrows.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
getrows.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
ggml-cuda.cu cuda : CUDA Graph Compute Function Refactor (precursor for performance improvements) (#11042) 2025-01-13 16:45:53 +01:00
gla.cu llama: add support for QRWKV6 model architecture (#11001) 2025-01-10 09:58:08 +08:00
gla.cuh llama: add support for QRWKV6 model architecture (#11001) 2025-01-10 09:58:08 +08:00
im2col.cu CUDA: fix 1D im2col, add tests (ggml/993) 2024-10-23 16:50:02 +03:00
im2col.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
mma.cuh CUDA: rename macros to avoid conflicts with WinAPI (#10736) 2024-12-10 18:23:24 +01:00
mmq.cu CUDA: rename macros to avoid conflicts with WinAPI (#10736) 2024-12-10 18:23:24 +01:00
mmq.cuh CUDA: rename macros to avoid conflicts with WinAPI (#10736) 2024-12-10 18:23:24 +01:00
mmv.cu CUDA: add BF16 support (#11093) 2025-01-06 02:33:52 +01:00
mmv.cuh CUDA: remove DMMV, consolidate F16 mult mat vec (#10318) 2024-11-17 09:09:55 +01:00
mmvq.cu CUDA: rename macros to avoid conflicts with WinAPI (#10736) 2024-12-10 18:23:24 +01:00
mmvq.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
norm.cu ggml : add epsilon as a parameter for group_norm (#8818) 2024-08-06 10:26:46 +03:00
norm.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
opt-step-adamw.cu ggml: new optimization interface (ggml/988) 2024-11-17 08:30:29 +02:00
opt-step-adamw.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
out-prod.cu ggml : fix builds (#0) 2024-09-20 21:15:05 +03:00
out-prod.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
pad.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
pad.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
pool2d.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
pool2d.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
quantize.cu cuda : optimize argmax (#10441) 2024-11-21 18:18:50 +01:00
quantize.cuh CUDA: optimize and refactor MMQ (#8416) 2024-07-11 16:47:47 +02:00
rope.cu llama : add Qwen2VL support + multimodal RoPE (#10361) 2024-12-14 14:43:46 +02:00
rope.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
scale.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
scale.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
softmax.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
softmax.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
sum.cu CUDA: rename macros to avoid conflicts with WinAPI (#10736) 2024-12-10 18:23:24 +01:00
sum.cuh tests: add gradient tests for all backends (ggml/932) 2024-09-08 11:05:55 +03:00
sumrows.cu sync : ggml 2024-08-27 22:41:27 +03:00
sumrows.cuh sync : ggml 2024-08-27 22:41:27 +03:00
tsembd.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
tsembd.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
unary.cu RWKV v6: RWKV_WKV op CUDA implementation (#9454) 2024-09-22 04:29:12 +02:00
unary.cuh RWKV v6: RWKV_WKV op CUDA implementation (#9454) 2024-09-22 04:29:12 +02:00
upscale.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
upscale.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
vecdotq.cuh CUDA: MMQ code deduplication + iquant support (#8495) 2024-07-20 22:25:26 +02:00
wkv6.cu llama: add support for QRWKV6 model architecture (#11001) 2025-01-10 09:58:08 +08:00
wkv6.cuh Optimize RWKV6 Operator Naming and Implement Multi-core CPU/SYCL Acceleration (#10133) 2024-11-07 15:19:10 +08:00