Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-25 13:58:46 +01:00)

Latest commit: 29ae62d2ae
* llama : fix embeddings ggml-ci
* llama : do not use KV cache for non-causal models ggml-ci
* embeddings : fix llama_batch_init arg
* llama : add pooling switch
* llama : distinguish token vs sequence embeddings ggml-ci
* llama : assert pooling tensor
* llama : simplify causal mask condition ggml-ci
* llama : assert input batch with pooling enabled
* readme : update API changes list
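Several of these commit items concern the embeddings API (the pooling switch and the token vs sequence distinction). As a rough illustration only, the sketch below shows how the two kinds of embeddings can be read back after decoding through the llama.cpp C API of this period; it assumes `llama_n_embd`, `llama_get_embeddings_ith`, and `llama_get_embeddings_seq` as exposed in `llama.h`, and it is not code taken from this commit.

```cpp
// Hedged sketch, not part of this repository's files: reading embeddings after
// llama_decode(), assuming the llama.h API of this period (llama_n_embd,
// llama_get_embeddings_ith, llama_get_embeddings_seq). Details may differ at
// commit 29ae62d2ae.
#include "llama.h"
#include <cstdio>

static void dump_embeddings(llama_context * ctx, const llama_model * model,
                            int n_tokens, llama_seq_id seq_id, bool pooled) {
    const int n_embd = llama_n_embd(model);

    if (pooled) {
        // sequence embedding: one pooled vector per sequence when pooling is enabled
        const float * emb = llama_get_embeddings_seq(ctx, seq_id);
        if (emb) {
            printf("seq %d: dim = %d, first value = %f\n", (int) seq_id, n_embd, emb[0]);
        }
    } else {
        // token embeddings: one vector per decoded token when no pooling is applied
        for (int i = 0; i < n_tokens; ++i) {
            const float * emb = llama_get_embeddings_ith(ctx, i);
            if (emb) {
                printf("token %d: first value = %f\n", i, emb[0]);
            }
        }
    }
}
```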
Files:

base64.hpp
build-info.cpp.in
CMakeLists.txt
common.cpp
common.h
console.cpp
console.h
grammar-parser.cpp
grammar-parser.h
log.h
sampling.cpp
sampling.h
stb_image.h
train.cpp
train.h