llama.cpp/ggml/src/ggml-cpu
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| cmake | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| llamafile | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| CMakeLists.txt | Make updates to fix issues with clang-cl builds while using AVX512 flags (#10314) | 2024-11-15 22:27:00 +01:00 |
| ggml-cpu-aarch64.c | backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (#9921) | 2024-11-15 01:28:50 +01:00 |
| ggml-cpu-aarch64.h | backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (#9921) | 2024-11-15 01:28:50 +01:00 |
| ggml-cpu-impl.h | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-cpu-quants.c | AVX BF16 and single scale quant optimizations (#10212) | 2024-11-15 12:47:58 +01:00 |
| ggml-cpu-quants.h | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-cpu.c | AVX BF16 and single scale quant optimizations (#10212) | 2024-11-15 12:47:58 +01:00 |
| ggml-cpu.cpp | backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (#9921) | 2024-11-15 01:28:50 +01:00 |