Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2025-01-13 05:42:22 +01:00)
llama.cpp/examples/parallel
A simplified simulation of serving incoming requests in parallel
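To illustrate the general idea of handling several incoming requests concurrently (this is only a generic sketch, not the example's actual implementation; `handle_request` and `serve_parallel` are hypothetical names):

```cpp
#include <future>
#include <string>
#include <vector>

// Hypothetical stand-in for processing one client request
// (in the real example, this would involve decoding tokens for a sequence).
std::string handle_request(int id) {
    return "response-" + std::to_string(id);
}

// Launch one asynchronous task per request and collect the results,
// simulating a server that handles a batch of clients in parallel.
std::vector<std::string> serve_parallel(int n_requests) {
    std::vector<std::future<std::string>> futures;
    futures.reserve(n_requests);
    for (int i = 0; i < n_requests; ++i) {
        futures.push_back(std::async(std::launch::async, handle_request, i));
    }
    std::vector<std::string> responses;
    responses.reserve(n_requests);
    for (auto & f : futures) {
        responses.push_back(f.get());  // wait for each request to finish
    }
    return responses;
}
```

The real example batches the parallel sequences through a single shared model context rather than spawning independent threads, which is what makes the simulation useful for exercising batched decoding.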