Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-02-06 00:20:34 +01:00
* ggml: CUDA unary op EXP
* ggml: rwkv_wkv op CUDA impl

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
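For context on the first part of the commit, below is a minimal, self-contained sketch of what an elementwise EXP unary op looks like in CUDA: one thread per element, with out-of-range threads exiting early. The names `exp_f32`, `exp_f32_cuda`, and `CUDA_EXP_BLOCK_SIZE` are illustrative assumptions, not the actual code added by this commit.

```cpp
// Hedged sketch of a CUDA elementwise EXP op. Names and block size are
// assumptions chosen for illustration only.
#include <cuda_runtime.h>
#include <math.h>

#define CUDA_EXP_BLOCK_SIZE 256 // assumed block size, not taken from the commit

// Each thread computes exp() of one element; extra threads return early.
static __global__ void exp_f32(const float * x, float * dst, const int k) {
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= k) {
        return;
    }
    dst[i] = expf(x[i]);
}

// Launches enough blocks to cover k elements on the given stream.
static void exp_f32_cuda(const float * x, float * dst, const int k, cudaStream_t stream) {
    const int num_blocks = (k + CUDA_EXP_BLOCK_SIZE - 1) / CUDA_EXP_BLOCK_SIZE;
    exp_f32<<<num_blocks, CUDA_EXP_BLOCK_SIZE, 0, stream>>>(x, dst, k);
}
```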
6 lines
135 B
Plaintext
#include "common.cuh"
|
|
|
|
#define CUDA_WKV_BLOCK_SIZE 64
|
|
|
|
void ggml_cuda_op_rwkv_wkv(ggml_backend_cuda_context & ctx, ggml_tensor * dst);
|
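To show how a declaration like the one above is typically backed, here is a minimal sketch of what the matching .cu file could look like, assuming the common ggml CUDA pattern of reading inputs from `dst->src[...]`, taking the CUDA stream from the backend context via `ctx.stream()`, and launching a kernel with `CUDA_WKV_BLOCK_SIZE` threads per block. The kernel body is a placeholder copy, not the actual RWKV WKV recurrence from the commit, and the included header name `wkv.cuh` is an assumption.

```cpp
// Hedged sketch only: the real implementation computes the RWKV WKV
// recurrence; this placeholder merely shows the launch pattern implied by
// CUDA_WKV_BLOCK_SIZE and the declared entry point.
#include "common.cuh"
#include "wkv.cuh" // assumed header name for the file shown above

// Placeholder kernel: copies input to output, one element per thread.
static __global__ void rwkv_wkv_placeholder_f32(const int n, const float * x, float * y) {
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = x[i]; // placeholder body, not the WKV recurrence
    }
}

void ggml_cuda_op_rwkv_wkv(ggml_backend_cuda_context & ctx, ggml_tensor * dst) {
    const ggml_tensor * src0  = dst->src[0]; // first input tensor (assumed layout)
    const float       * src_d = (const float *) src0->data;
    float             * dst_d = (float       *) dst->data;
    cudaStream_t        stream = ctx.stream();

    const int n       = (int) ggml_nelements(dst);
    const int nblocks = (n + CUDA_WKV_BLOCK_SIZE - 1) / CUDA_WKV_BLOCK_SIZE;

    rwkv_wkv_placeholder_f32<<<nblocks, CUDA_WKV_BLOCK_SIZE, 0, stream>>>(n, src_d, dst_d);
}
```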