
# llama.cpp/example/parallel

Simplified simulation of serving incoming requests in parallel
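A minimal sketch of a typical run, assuming the `llama-parallel` binary has been built and a local GGUF model is available (the model path below is a placeholder). The flags come from llama.cpp's common options: `-np` sets the number of client sequences decoded in parallel, `-ns` the total number of simulated requests, `-cb` enables continuous batching, and `-c` sets the shared context size.

```bash
# Sketch of a typical invocation; models/model.gguf is a placeholder path
./llama-parallel -m models/model.gguf \
    -np 4   `# decode up to 4 client sequences in parallel` \
    -ns 32  `# simulate 32 incoming requests in total` \
    -cb     `# enable continuous batching` \
    -c 8192 `# context size shared across all sequences`
```

With continuous batching enabled, finished sequences free their slots immediately so new requests can be scheduled without waiting for the whole batch to drain.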