llama.cpp/.devops
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| nix | nix: allow to override rocm gpu targets (#10794) | 2024-12-14 10:17:36 -08:00 |
| cloud-v-pipeline | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| cpu.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| cuda.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| intel.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| llama-cli-cann.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cpp-cuda.srpm.spec | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cpp.srpm.spec | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| musa.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| rocm.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| tools.sh | fix: graceful shutdown for Docker images (#10815) | 2024-12-13 18:23:50 +01:00 |
| vulkan.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
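Most of the per-backend Dockerfiles above were converted to docker multi-stage builds in #10832. As a rough illustration, the sketch below shows how such images are typically built from the repository root with the standard Docker CLI; the image tags and the `--target` stage names (`full`, `light`, `server`) are assumptions based on the multi-stage layout, not something stated in the listing itself.

```sh
# Hypothetical sketch: build images from the multi-stage .devops/cpu.Dockerfile
# listed above. The stage names passed to --target (full, light, server) are
# assumed from the multi-stage builds added in #10832.
docker build -t local/llama.cpp:full-cpu   --target full   -f .devops/cpu.Dockerfile .
docker build -t local/llama.cpp:light-cpu  --target light  -f .devops/cpu.Dockerfile .
docker build -t local/llama.cpp:server-cpu --target server -f .devops/cpu.Dockerfile .
```

The same pattern would apply to the other backend Dockerfiles (cuda, intel, musa, rocm, vulkan), swapping in the corresponding file after `-f`.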