llama.cpp/src
Latest commit: 116efee0ee by Ivan, 2024-09-24 02:14:24 +02:00
cuda: add q8_0->f32 cpy operation (#9571)
llama: enable K-shift for quantized KV cache. It will fail on unsupported backends or quant types.
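For context: q8_0 stores data in blocks of 32 int8 quants sharing a single fp16 scale, so copying to f32 is a per-element dequantization (y = q * d). Below is a minimal CUDA sketch of such a dequantize-copy kernel; the struct mirrors ggml's block_q8_0 layout, but the kernel name, signature, and one-thread-per-element launch scheme are illustrative assumptions, not the actual implementation in ggml-cuda.

```cuda
#include <cuda_fp16.h>
#include <stdint.h>

#define QK8_0 32

// Mirrors ggml's block_q8_0: one fp16 scale shared by 32 int8 quants.
typedef struct {
    half   d;          // per-block scale (delta)
    int8_t qs[QK8_0];  // quantized values
} block_q8_0;

// Hypothetical dequantize-copy kernel, one thread per output element:
// y[i] = d(block containing i) * qs[i within block]
__global__ void cpy_q8_0_f32(const block_q8_0 * x, float * y, int64_t n) {
    const int64_t i = (int64_t) blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) {
        return;
    }
    const block_q8_0 * b = &x[i / QK8_0];
    y[i] = __half2float(b->d) * (float) b->qs[i % QK8_0];
}
```

With a q8_0->f32 cpy op available, the K-shift (RoPE re-applied to the cached K tensor after a context shift) can read a quantized K cache by dequantizing it on the fly; backends or quant types without such a cpy path are what the commit message warns will fail.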
File | Last commit | Date
---- | ----------- | ----
CMakeLists.txt | llama : move vocab, grammar and sampling into separate files (#8508) | 2024-07-23 13:10:17 +03:00
llama-grammar.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00
llama-grammar.h | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00
llama-impl.h | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00
llama-sampling.cpp | llama : use reserve/emplace_back in sampler_sample (#9534) | 2024-09-18 14:42:36 +03:00
llama-sampling.h | llama : refactor samplers internal implementation (#9370) | 2024-09-08 15:52:07 +02:00
llama-vocab.cpp | llama : support RWKV v6 models (#8980) | 2024-09-01 17:38:17 +03:00
llama-vocab.h | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00
llama.cpp | cuda: add q8_0->f32 cpy operation (#9571) | 2024-09-24 02:14:24 +02:00
unicode-data.cpp | Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) | 2024-07-02 12:18:10 -04:00
unicode-data.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
unicode.cpp | unicode : add <algorithm> (#9508) | 2024-09-17 09:51:15 +03:00
unicode.h | llama : move vocab, grammar and sampling into separate files (#8508) | 2024-07-23 13:10:17 +03:00