Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-27 04:23:06 +01:00
Commit e34c5af43f:

* ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0()
* ggml-cpu: format code

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
Directory listing:

- amx/
- cmake/
- llamafile/
- CMakeLists.txt
- cpu-feats-x86.cpp
- ggml-cpu-aarch64.cpp
- ggml-cpu-aarch64.h
- ggml-cpu-hbm.cpp
- ggml-cpu-hbm.h
- ggml-cpu-impl.h
- ggml-cpu-quants.c
- ggml-cpu-quants.h
- ggml-cpu-traits.cpp
- ggml-cpu-traits.h
- ggml-cpu.c
- ggml-cpu.cpp