Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-14 14:28:58 +01:00
Commit c8c07d658a

* llama : fix empty batch causing llama_batch_allocr to crash
* move batch_allocr inside decode/encode_internal
* fix build
* add GGML_ASSERT
* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
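The commit above guards decode/encode against an empty batch before the internal batch allocator is constructed, so a zero-token `llama_batch` is rejected cleanly instead of crashing. The upstream code is not reproduced here; the following standalone C++ sketch only illustrates the guard-plus-assert pattern the commit message describes. The `batch`, `batch_allocr`, `decode_internal`, and `MY_ASSERT` names are simplified stand-ins (with `MY_ASSERT` playing the role of `GGML_ASSERT`), not the library's actual API.

```cpp
// Standalone sketch: reject an empty batch before the allocator runs.
#include <cstdio>
#include <cstdlib>
#include <vector>

// Stand-in for GGML_ASSERT: abort with a message if the condition fails.
#define MY_ASSERT(x)                                          \
    do {                                                      \
        if (!(x)) {                                           \
            std::fprintf(stderr, "assert failed: %s\n", #x);  \
            std::abort();                                     \
        }                                                     \
    } while (0)

// Simplified batch: just a token count and token ids.
struct batch {
    int32_t              n_tokens = 0;
    std::vector<int32_t> tokens;
};

// Simplified allocator that assumes it never sees an empty batch.
struct batch_allocr {
    explicit batch_allocr(const batch & b) {
        MY_ASSERT(b.n_tokens > 0 && "batch_allocr requires a non-empty batch");
        buf.resize(b.n_tokens);
    }
    std::vector<int32_t> buf;
};

// The allocator is constructed only after the batch has been validated,
// so an empty batch becomes an error return instead of a crash deeper inside.
static int decode_internal(const batch & b) {
    if (b.n_tokens == 0) {
        std::fprintf(stderr, "decode_internal: empty batch rejected\n");
        return -1;
    }
    batch_allocr alloc(b); // safe: n_tokens > 0 is guaranteed here
    // ... run the actual decode using alloc.buf ...
    return 0;
}

int main() {
    batch empty;                       // n_tokens == 0
    batch ok;
    ok.n_tokens = 3;
    ok.tokens   = {1, 2, 3};

    std::printf("empty batch -> %d\n", decode_internal(empty)); // prints -1, no crash
    std::printf("ok batch    -> %d\n", decode_internal(ok));    // prints 0
    return 0;
}
```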
Files:

CMakeLists.txt
llama-grammar.cpp
llama-grammar.h
llama-impl.h
llama-sampling.cpp
llama-sampling.h
llama-vocab.cpp
llama-vocab.h
llama.cpp
unicode-data.cpp
unicode-data.h
unicode.cpp
unicode.h