llama.cpp/ggml/src/ggml-cpu
Name                 Last commit message                                        Last updated
cmake                ggml : build backends as libraries (#10256)                2024-11-14 18:04:35 +01:00
llamafile            llamafile : fix include path (#0)                          2024-11-16 20:36:26 +02:00
CMakeLists.txt       cmake : fix ARM feature detection (#10543)                 2024-11-28 14:56:23 +02:00
ggml-cpu-aarch64.c   ggml-cpu: fix typo in gemv/gemm iq4_nl_4_4 (#10580)        2024-11-29 14:49:02 +01:00
ggml-cpu-aarch64.h   ggml-cpu: support IQ4_NL_4_4 by runtime repack (#10541)    2024-11-28 13:52:03 +01:00
ggml-cpu-impl.h      ggml : build backends as libraries (#10256)                2024-11-14 18:04:35 +01:00
ggml-cpu-quants.c    ggml : fix I8MM Q4_1 scaling factor conversion (#10562)    2024-11-29 16:25:39 +02:00
ggml-cpu-quants.h    ggml : build backends as libraries (#10256)                2024-11-14 18:04:35 +01:00
ggml-cpu.c           ggml : fix I8MM Q4_1 scaling factor conversion (#10562)    2024-11-29 16:25:39 +02:00
ggml-cpu.cpp         ggml-cpu: support IQ4_NL_4_4 by runtime repack (#10541)    2024-11-28 13:52:03 +01:00