llama.cpp/ggml/src
Latest commit: ggml-cpu: support IQ4_NL_4_4 by runtime repack (#10541)
Author: Shupei Fan (c202cef168), 2024-11-28 13:52:03 +01:00
* ggml-cpu: support IQ4_NL_4_4 by runtime repack
* ggml-cpu: add __ARM_FEATURE_DOTPROD guard
Name                  Last commit date            Last commit message
ggml-amx              2024-11-25 15:13:39 +01:00  ggml : add support for dynamic loading of backends (#10469)
ggml-blas             2024-11-25 15:13:39 +01:00  ggml : add support for dynamic loading of backends (#10469)
ggml-cann             2024-11-28 15:25:24 +08:00  CANN: Fix SOC_TYPE compile bug (#10519)
ggml-cpu              2024-11-28 13:52:03 +01:00  ggml-cpu: support IQ4_NL_4_4 by runtime repack (#10541)
ggml-cuda             2024-11-27 17:10:08 +01:00  Add some minimal optimizations for CDNA (#10498)
ggml-hip              2024-11-25 15:13:39 +01:00  ggml : add support for dynamic loading of backends (#10469)
ggml-kompute          2024-11-28 12:51:38 +01:00  kompute : improve backend to pass test_backend_ops (#10542)
ggml-metal            2024-11-27 11:22:14 +02:00  metal : fix group_norm support condition (#0)
ggml-musa             2024-11-26 17:00:41 +01:00  mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516)
ggml-rpc              2024-11-25 15:13:39 +01:00  ggml : add support for dynamic loading of backends (#10469)
ggml-sycl             2024-11-25 15:13:39 +01:00  ggml : add support for dynamic loading of backends (#10469)
ggml-vulkan           2024-11-27 08:32:54 +01:00  vulkan: define all quant data structures in types.comp (#10440)
CMakeLists.txt        2024-11-26 14:18:08 +02:00  cmake : enable warnings in llama (#10474)
ggml-aarch64.c        2024-11-16 01:53:37 +01:00  ggml : optimize Q4_0 into Q4_0_X_Y repack (#10324)
ggml-aarch64.h        2024-11-14 18:04:35 +01:00  ggml : build backends as libraries (#10256)
ggml-alloc.c          2024-11-17 08:30:29 +02:00  ggml: new optimization interface (ggml/988)
ggml-backend-impl.h   2024-11-25 15:13:39 +01:00  ggml : add support for dynamic loading of backends (#10469)
ggml-backend-reg.cpp  2024-11-25 19:30:06 +01:00  llama : accept a list of devices to use to offload a model (#10497)
ggml-backend.cpp      2024-11-21 09:22:05 +02:00  ggml/sched : do not skip views in pre-assignments
ggml-common.h         2024-11-28 13:52:03 +01:00  ggml-cpu: support IQ4_NL_4_4 by runtime repack (#10541)
ggml-impl.h           2024-11-27 11:10:27 +02:00  Do not include arm_neon.h when compiling CUDA code (ggml/1028)
ggml-opt.cpp          2024-11-21 09:22:02 +02:00  ggml-opt: fix data corruption (ggml/1022)
ggml-quants.c         2024-11-14 18:04:35 +01:00  ggml : build backends as libraries (#10256)
ggml-quants.h         2024-11-14 18:04:35 +01:00  ggml : build backends as libraries (#10256)
ggml-threading.cpp    2024-11-14 18:04:35 +01:00  ggml : build backends as libraries (#10256)
ggml-threading.h      2024-11-14 18:04:35 +01:00  ggml : build backends as libraries (#10256)
ggml.c                2024-11-28 13:52:03 +01:00  ggml-cpu: support IQ4_NL_4_4 by runtime repack (#10541)
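
Many of the entries above trace back to the dynamic backend loading work (#10469). As a rough sketch of how that registry can be consumed, assuming the C API declared in ggml-backend.h at this revision (ggml_backend_load_all, ggml_backend_reg_count/get/name, ggml_backend_dev_count/get/name/description), an application could enumerate the loaded backends and their devices like this:

// Minimal sketch, not part of this tree: list the backends and devices that
// ggml registers after loading any dynamically built backend libraries.
#include <stdio.h>
#include "ggml-backend.h"

int main(void) {
    // Load all available backend shared libraries (ggml-cpu, ggml-cuda, ...).
    ggml_backend_load_all();

    // Enumerate the registered backends.
    for (size_t i = 0; i < ggml_backend_reg_count(); i++) {
        ggml_backend_reg_t reg = ggml_backend_reg_get(i);
        printf("backend: %s\n", ggml_backend_reg_name(reg));
    }

    // Enumerate the devices those backends expose.
    for (size_t i = 0; i < ggml_backend_dev_count(); i++) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        printf("device:  %s (%s)\n", ggml_backend_dev_name(dev), ggml_backend_dev_description(dev));
    }
    return 0;
}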