Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2025-01-21 17:19:23 +01:00)
Commit 3bcd40b3c5

* rwkv6: rename to wkv6
* rwkv6: support AVX2, AVX512, ARMv8, ARMv9
* rwkv6: update CUDA file name
* rwkv6: rename params
* wkv on SYCL
* sycl: add some ops
* sycl: enhance OP support judgment
* wkv6: drop ARMv9 and transfer to GGML style (ggml-ci)
* sync : ggml
* update the function to use appropriate types
* fix define error
* update ggml/src/ggml-cpu.c
* add appropriate asserts
* move element-wise functions outside
* put the declaration outside the loop
* rewrite to be more in line with the common pattern for distributing threads
* use the recommended GGML_TENSOR_LOCALS

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
Co-authored-by: Plamen Minev <pacominev@gmail.com>
Co-authored-by: Yuri Khrustalev <ykhrustalev@users.noreply.github.com>
Co-authored-by: Meng, Hengyu <airdldl@163.com>
12 lines · 251 B · C++
#ifndef GGML_SYCL_OUTPROD_HPP
#define GGML_SYCL_OUTPROD_HPP

#include "common.hpp"

void ggml_sycl_op_out_prod(ggml_backend_sycl_context& ctx, const ggml_tensor* src0,
                           const ggml_tensor* src1, ggml_tensor* dst);

#endif // GGML_SYCL_OUTPROD_HPP