mirror of https://github.com/ggerganov/llama.cpp.git (synced 2025-01-29 13:24:50 +01:00)
Commit caf773f249: "ci : do not fail-fast for docker * build arm64/amd64 separatedly * fix pip * no fast fail * vulkan: try jammy"
nix/
cloud-v-pipeline
cpu.Dockerfile
cuda.Dockerfile
intel.Dockerfile
llama-cli-cann.Dockerfile
llama-cpp-cuda.srpm.spec
llama-cpp.srpm.spec
musa.Dockerfile
rocm.Dockerfile
tools.sh
vulkan.Dockerfile