# llama.cpp/example/parallel
Simplified simulation of serving incoming requests in parallel
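
A minimal sketch of how the example might be invoked, assuming the common llama.cpp flags for a model path (`-m`), number of parallel sequences (`-np`), number of simulated requests (`-ns`), context size (`-c`), and continuous batching (`-cb`); the model path is a placeholder, and the exact flag set should be confirmed with `./parallel --help`:

```bash
# Hypothetical invocation: serve 32 simulated requests across 4 parallel sequences
./parallel -m model.gguf -np 4 -ns 32 -c 4096 -cb
```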