| File | Last commit message | Last commit date |
| --- | --- | --- |
| nix | nix: allow to override rocm gpu targets (#10794) | 2024-12-14 10:17:36 -08:00 |
| cloud-v-pipeline | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| full-cuda.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| full-musa.Dockerfile | mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516) | 2024-11-26 17:00:41 +01:00 |
| full-rocm.Dockerfile | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 2024-09-30 20:57:12 +02:00 |
| full.Dockerfile | ggml : add predefined list of CPU backend variants to build (#10626) | 2024-12-04 14:45:40 +01:00 |
| llama-cli-cann.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cli-cuda.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cli-intel.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cli-musa.Dockerfile | mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516) | 2024-11-26 17:00:41 +01:00 |
| llama-cli-rocm.Dockerfile | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 2024-09-30 20:57:12 +02:00 |
| llama-cli-vulkan.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cli.Dockerfile | ggml : add predefined list of CPU backend variants to build (#10626) | 2024-12-04 14:45:40 +01:00 |
| llama-cpp-cuda.srpm.spec | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cpp.srpm.spec | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-server-cuda.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-server-intel.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-server-musa.Dockerfile | mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516) | 2024-11-26 17:00:41 +01:00 |
| llama-server-rocm.Dockerfile | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 2024-09-30 20:57:12 +02:00 |
| llama-server-vulkan.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-server.Dockerfile | ggml : add predefined list of CPU backend variants to build (#10626) | 2024-12-04 14:45:40 +01:00 |
| tools.sh | fix: graceful shutdown for Docker images (#10815) | 2024-12-13 18:23:50 +01:00 |
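The Dockerfiles above are the build recipes for the project's container images. As a minimal usage sketch, assuming these files live in the repository's `.devops/` directory (as in llama.cpp) and the build is run from the repository root, an image can be built and run roughly like this; the tag `local/llama.cpp:server-cuda` and the model path are placeholder examples, not fixed names from this listing:

```bash
# Build the CUDA-enabled server image from the repository root.
# Assumes the Dockerfiles live under .devops/, as in llama.cpp.
docker build -t local/llama.cpp:server-cuda \
  -f .devops/llama-server-cuda.Dockerfile .

# Run the server with GPU access (--gpus all requires the NVIDIA
# Container Toolkit) and a host directory of GGUF models mounted in.
docker run --gpus all -v /path/to/models:/models -p 8080:8080 \
  local/llama.cpp:server-cuda \
  -m /models/model.gguf --host 0.0.0.0 --port 8080
```

The per-backend variants (`-rocm`, `-musa`, `-intel`, `-vulkan`, `-cann`) follow the same pattern with their own Dockerfile and runtime flags for the corresponding GPU stack.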