# llama.cpp/example/parallel

Simplified simulation of serving incoming requests in parallel
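
A minimal sketch of an invocation is shown below. The binary name and flag values are assumptions for illustration (`-np` for the number of parallel client slots, `-ns` for the total number of sequences to simulate, `-c` for the shared context size, `-cb` to enable continuous batching); run the binary with `--help` to see the options supported by your build.

```bash
# Simulate 8 parallel clients answering 64 requests in total,
# sharing a 16k-token context with continuous batching enabled.
./llama-parallel -m model.gguf -np 8 -ns 64 -c 16384 -cb
```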