commit ac2219fef3

* llama : fix session saving/loading
* llama : temp fix for clearing "future" tokens from the KV cache
* llama : fix handling of "future" tokens when loading sessions
* llama : fix comments for llama_kv_cache API
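These changes concern how a restored session interacts with KV-cache entries at positions past the resume point (the "future" tokens). Below is a minimal sketch of the pattern involved, assuming the llama.h API of this period (llama_load_session_file, llama_kv_cache_seq_rm, llama_save_session_file); the file names and resume point are placeholders, not the commit's actual code:

```cpp
// Minimal sketch (not the commit's code): load a saved session, then clear
// "future" KV-cache entries past the intended resume point before decoding.
// Assumes the llama.h API of this period; "model.gguf", "session.bin" and
// the n_keep resume point are placeholders.
#include "llama.h"

#include <cstdio>
#include <vector>

int main() {
    llama_backend_init(false);

    llama_model * model = llama_load_model_from_file("model.gguf", llama_model_default_params());
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    llama_context_params cparams = llama_context_default_params();
    llama_context * ctx = llama_new_context_with_model(model, cparams);

    // restore the token history and KV cache of a previous run
    std::vector<llama_token> tokens(cparams.n_ctx);
    size_t n_loaded = 0;
    if (!llama_load_session_file(ctx, "session.bin", tokens.data(), tokens.size(), &n_loaded)) {
        fprintf(stderr, "failed to load session\n");
        return 1;
    }
    tokens.resize(n_loaded);

    // resume from an earlier position: remove KV entries for positions
    // [n_keep, end) so stale "future" tokens cannot affect new decoding
    // (assumes p1 = -1 means "to the end of the sequence")
    const llama_pos n_keep = (llama_pos) (n_loaded / 2); // placeholder resume point
    llama_kv_cache_seq_rm(ctx, 0, n_keep, -1);

    // ... evaluate new tokens starting at position n_keep ...

    // persist only the kept prefix
    llama_save_session_file(ctx, "session.bin", tokens.data(), (size_t) n_keep);

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

Without the llama_kv_cache_seq_rm call, entries beyond the resume point could persist in the cache and influence subsequent decoding, which is the class of issue the commit's "future" token fixes address.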
Files in this directory:

* CMakeLists.txt
* parallel.cpp
* README.md
llama.cpp/example/parallel
Simplified simulation of serving incoming requests in parallel
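A typical invocation, assuming the flags the example and the common option parser exposed at the time (-np sets the number of parallel decoding slots, -ns the number of simulated client requests, -cb enables continuous batching); the model path is a placeholder:

```sh
./parallel -m model.gguf -c 4096 -np 4 -ns 32 -cb
```

Each simulated request is assigned its own sequence id, and a finished slot is reclaimed by removing that sequence's entries from the KV cache, the llama_kv_cache API area touched by this commit.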