Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2025-01-04 18:17:54 +01:00)
Latest commit `0a4ce78681` — common : Changed tuple to struct (TODO fix)

* Use struct `llama_init_result` to replace the previous `std::tuple<struct llama_model *, struct llama_context *>`
* delete `llama_init_default_params()`
* delete the extra whitespace
Files in this directory:

* CMakeLists.txt
* parallel.cpp
* README.md
# llama.cpp/example/parallel

Simplified simulation of serving incoming requests in parallel
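A usage sketch, assuming the example has been built from this directory with CMake; the binary name and the model path are placeholders, and the flags shown (`-m` for the model file, `-np` for the number of parallel sequences, `-ns` for the number of requests to simulate) are common llama.cpp example options, so check `--help` on your build for the exact set:

```sh
# Sketch: simulate 64 incoming requests decoded across 8 parallel sequences.
./parallel -m models/model.gguf -np 8 -ns 64
```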