llama.cpp/src

Latest commit 6374743747 by Diego Devesa (2024-10-07 21:55:08 +02:00):
ggml : add backend registry / device interfaces to BLAS backend (#9752)

* ggml : add backend registry / device interfaces to BLAS backend
* fix mmap usage when using host buffers
File               | Last commit                                                              | Date
CMakeLists.txt     | llama : move vocab, grammar and sampling into separate files (#8508)     | 2024-07-23 13:10:17 +03:00
llama-grammar.cpp  | llama : refactor sampling v2 (#9294)                                     | 2024-09-07 15:16:19 +03:00
llama-grammar.h    | llama : refactor sampling v2 (#9294)                                     | 2024-09-07 15:16:19 +03:00
llama-impl.h       | log : add CONT level for continuing previous log entry (#9610)           | 2024-09-24 10:15:35 +03:00
llama-sampling.cpp | sampling : avoid expensive softmax during greedy sampling (#9605)        | 2024-09-24 09:03:17 +03:00
llama-sampling.h   | llama : refactor samplers internal implementation (#9370)                | 2024-09-08 15:52:07 +02:00
llama-vocab.cpp    | llama : add reranking support (#9510)                                    | 2024-09-28 17:42:03 +03:00
llama-vocab.h      | rerank : use [SEP] token instead of [BOS] (#9737)                        | 2024-10-05 15:55:04 +03:00
llama.cpp          | ggml : add backend registry / device interfaces to BLAS backend (#9752)  | 2024-10-07 21:55:08 +02:00
unicode-data.cpp   | llama : reduce compile time and binary size (#9712)                      | 2024-10-02 15:49:55 +02:00
unicode-data.h     | llama : reduce compile time and binary size (#9712)                      | 2024-10-02 15:49:55 +02:00
unicode.cpp        | llama : reduce compile time and binary size (#9712)                      | 2024-10-02 15:49:55 +02:00
unicode.h          | llama : move vocab, grammar and sampling into separate files (#8508)     | 2024-07-23 13:10:17 +03:00