116efee0ee
llama: enable K-shift for quantized KV cache

It will fail on unsupported backends or quant types.
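For context, a K-shift re-positions keys that are already in the cache when token positions change (e.g. when the context window slides): because RoPE rotations compose additively, shifting a cached key from position p to p+delta amounts to rotating it by the angle for delta. Below is a minimal, hedged sketch of that idea in plain C++. It is not llama.cpp's implementation (which builds a ggml graph applying RoPE to views of the K cache tensor); the helper name `k_shift_row`, the head size, and the `freq_base` default are illustrative assumptions. It also shows why the quantized case is harder: the rotation operates on floats, so a quantized K cache needs a backend that can apply RoPE to that quant type (or dequantize, rotate, and requantize), and combinations without that support fail, as the commit message says.

```cpp
// Hedged sketch, not llama.cpp code: what a K-shift does conceptually.
// RoPE rotates each (even, odd) pair of a key vector by an angle
// proportional to the token position, so shifting cached positions by
// `delta` is equivalent to rotating every cached K vector by delta's angle.

#include <cmath>
#include <cstdio>
#include <vector>

// Rotate one cached key vector in place by the RoPE angle for `delta`
// positions. `head_dim` is the per-head embedding size; `freq_base` is the
// usual RoPE base (10000 in the original RoPE paper). Both names here are
// illustrative, not llama.cpp API.
static void k_shift_row(std::vector<float> &k, int head_dim, int delta,
                        float freq_base = 10000.0f) {
    for (int i = 0; i < head_dim; i += 2) {
        // Per-pair frequency: base^(-i/d), the standard RoPE schedule.
        const float theta = delta * std::pow(freq_base, -(float)i / head_dim);
        const float c = std::cos(theta);
        const float s = std::sin(theta);
        const float x0 = k[i];
        const float x1 = k[i + 1];
        k[i]     = x0 * c - x1 * s;  // standard 2-D rotation of the pair
        k[i + 1] = x0 * s + x1 * c;
        // With a quantized cache, k[] is not float data: the backend must
        // either support this rotation on the quant type directly or go
        // through dequantize -> rotate -> requantize; otherwise it fails.
    }
}

int main() {
    // One cached key vector for an 8-dim head, shifted back by 4 positions
    // (the typical case when the context window slides forward).
    std::vector<float> k = {1, 0, 1, 0, 1, 0, 1, 0};
    k_shift_row(k, 8, -4);
    for (float v : k) std::printf("%6.3f ", v);
    std::printf("\n");
    return 0;
}
```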