| Name | Last commit message | Last commit date |
|---|---|---|
| template-instances/ | CUDA: MMQ code deduplication + iquant support (#8495) | 2024-07-20 22:25:26 +02:00 |
| vendors/ | musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (#9526) | 2024-09-22 16:55:49 +02:00 |
| acc.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| acc.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| arange.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| arange.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| argmax.cu | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| argmax.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| argsort.cu | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00 |
| argsort.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| binbcast.cu | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| binbcast.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| clamp.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| clamp.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| common.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| concat.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| concat.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| conv-transpose-1d.cu | feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) | 2024-07-08 12:23:00 +03:00 |
| conv-transpose-1d.cuh | feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) | 2024-07-08 12:23:00 +03:00 |
| convert.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| convert.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| count-equal.cu | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| count-equal.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| cpy.cu | cuda: add q8_0->f32 cpy operation (#9571) | 2024-09-24 02:14:24 +02:00 |
| cpy.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| cross-entropy-loss.cu | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| cross-entropy-loss.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| dequantize.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| diagmask.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| diagmask.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| dmmv.cu | Vectorize load instructions in dmmv f16 CUDA kernel (#9816) | 2024-10-14 02:49:08 +02:00 |
| dmmv.cuh | cuda : fix dmmv cols requirement to 2*GGML_CUDA_DMMV_X (#8800) | 2024-08-01 15:26:22 +02:00 |
| fattn-common.cuh | CPU/CUDA: Gemma 2 FlashAttention support (#8542) | 2024-08-24 21:34:59 +02:00 |
| fattn-tile-f16.cu | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| fattn-tile-f16.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| fattn-tile-f32.cu | musa: enable building fat binaries, enable unified memory, and disable Flash Attention on QY1 (MTT S80) (#9526) | 2024-09-22 16:55:49 +02:00 |
| fattn-tile-f32.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| fattn-vec-f16.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| fattn-vec-f32.cuh | CPU/CUDA: Gemma 2 FlashAttention support (#8542) | 2024-08-24 21:34:59 +02:00 |
| fattn-wmma-f16.cuh | CPU/CUDA: Gemma 2 FlashAttention support (#8542) | 2024-08-24 21:34:59 +02:00 |
| fattn.cu | CUDA: enable Gemma FA for HIP/Pascal (#9581) | 2024-09-22 09:34:52 +02:00 |
| fattn.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| getrows.cu | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00 |
| getrows.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| im2col.cu | CUDA: remove bad assert (ggml/972) | 2024-09-29 21:15:37 +03:00 |
| im2col.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| mma.cuh | CUDA: optimize and refactor MMQ (#8416) | 2024-07-11 16:47:47 +02:00 |
| mmq.cu | CUDA: fix --split-mode row race condition (#9413) | 2024-09-11 10:22:40 +02:00 |
| mmq.cuh | CUDA: fix --split-mode row race condition (#9413) | 2024-09-11 10:22:40 +02:00 |
| mmvq.cu | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00 |
| mmvq.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| norm.cu | ggml : add epsilon as a parameter for group_norm (#8818) | 2024-08-06 10:26:46 +03:00 |
| norm.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| opt-step-adamw.cu | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| opt-step-adamw.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| out-prod.cu | ggml : fix builds (#0) | 2024-09-20 21:15:05 +03:00 |
| out-prod.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| pad.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pad.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pool2d.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pool2d.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| quantize.cu | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00 |
| quantize.cuh | CUDA: optimize and refactor MMQ (#8416) | 2024-07-11 16:47:47 +02:00 |
| rope.cu | ggml : move rope type enum to ggml.h (#8949) | 2024-08-13 21:13:15 +02:00 |
| rope.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| rwkv-wkv.cu | RWKV v6: RWKV_WKV op CUDA implementation (#9454) | 2024-09-22 04:29:12 +02:00 |
| rwkv-wkv.cuh | RWKV v6: RWKV_WKV op CUDA implementation (#9454) | 2024-09-22 04:29:12 +02:00 |
| scale.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| scale.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| softmax.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| softmax.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| sum.cu | CUDA: fix sum.cu compilation for CUDA < 11.7 (#9562) | 2024-09-20 18:35:35 +02:00 |
| sum.cuh | tests: add gradient tests for all backends (ggml/932) | 2024-09-08 11:05:55 +03:00 |
| sumrows.cu | sync : ggml | 2024-08-27 22:41:27 +03:00 |
| sumrows.cuh | sync : ggml | 2024-08-27 22:41:27 +03:00 |
| tsembd.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| tsembd.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| unary.cu | RWKV v6: RWKV_WKV op CUDA implementation (#9454) | 2024-09-22 04:29:12 +02:00 |
| unary.cuh | RWKV v6: RWKV_WKV op CUDA implementation (#9454) | 2024-09-22 04:29:12 +02:00 |
| upscale.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| upscale.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| vecdotq.cuh | CUDA: MMQ code deduplication + iquant support (#8495) | 2024-07-20 22:25:26 +02:00 |
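The listing shows the directory's recurring convention: each op (scale, pad, concat, ...) ships a `.cu` file holding the kernel and its host-side launcher, paired with a `.cuh` header that exposes the entry point to the rest of the backend. Below is a minimal, self-contained sketch of that pattern for a scale-like op; the names `scale_f32` and `launch_scale_f32` are illustrative placeholders, not the repository's actual identifiers.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

#define SCALE_BLOCK_SIZE 256

// Element-wise y[i] = x[i] * s -- the kind of kernel a file like scale.cu holds.
static __global__ void scale_f32(const float * x, float * y, const float s, const int n) {
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) {
        return;
    }
    y[i] = x[i] * s;
}

// Host-side launcher -- the kind of symbol the matching .cuh header would declare.
static void launch_scale_f32(const float * x, float * y, const float s, const int n, cudaStream_t stream) {
    const int num_blocks = (n + SCALE_BLOCK_SIZE - 1) / SCALE_BLOCK_SIZE;
    scale_f32<<<num_blocks, SCALE_BLOCK_SIZE, 0, stream>>>(x, y, s, n);
}

int main() {
    const int n = 1024;
    float * x = nullptr;
    float * y = nullptr;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) {
        x[i] = 1.0f;
    }
    launch_scale_f32(x, y, 2.0f, n, 0);
    cudaDeviceSynchronize();
    printf("y[0] = %f\n", y[0]); // expect 2.000000
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Splitting each op into a kernel/launcher `.cu` and a declaration-only `.cuh` keeps compilation units small and lets the build instantiate per-architecture variants (see template-instances/) without recompiling every op.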