Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-25 05:48:47 +01:00)
Commit `39baaf55a1`:

* feat: add Dockerfiles for each platform that use `./server` instead of `./main`
* feat: update `.github/workflows/docker.yml` to build server-first docker containers
* doc: add information about running the server with Docker to README.md
* doc: add information about running with Docker to the server README
* doc: update `n-gpu-layers` to show correct GPU usage
* fix(doc): update container tag from `server` to `server-cuda` for the README example on running the server container with CUDA
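The commit message mentions documentation for running the server with Docker, including the `server-cuda` tag for GPU use. A minimal usage sketch, assuming locally built images tagged `llama.cpp:server` and `llama.cpp:server-cuda` (image names and the model path are placeholders, not values from the commit):

```sh
# CPU-only server image: mount a models directory and publish the HTTP port
docker run -v /path/to/models:/models -p 8000:8000 llama.cpp:server \
    -m /models/7B/ggml-model.gguf --host 0.0.0.0 --port 8000

# CUDA variant: note the server-cuda tag, GPU passthrough, and offloading
# all layers to the GPU via --n-gpu-layers
docker run --gpus all -v /path/to/models:/models -p 8000:8000 llama.cpp:server-cuda \
    -m /models/7B/ggml-model.gguf --host 0.0.0.0 --port 8000 --n-gpu-layers 99
```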
Directory contents:

- nix
- cloud-v-pipeline
- full-cuda.Dockerfile
- full-rocm.Dockerfile
- full.Dockerfile
- llama-cpp-clblast.srpm.spec
- llama-cpp-cublas.srpm.spec
- llama-cpp.srpm.spec
- main-cuda.Dockerfile
- main-intel.Dockerfile
- main-rocm.Dockerfile
- main.Dockerfile
- server-cuda.Dockerfile
- server-intel.Dockerfile
- server-rocm.Dockerfile
- server.Dockerfile
- tools.sh
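The listing includes one server Dockerfile per platform (CUDA, Intel, ROCm, plain CPU). A minimal sketch of the shape a CPU-only `server.Dockerfile` might take, assuming a two-stage build where `make server` produces a `server` binary; the base image, package set, and stage layout here are assumptions, not the repository's exact file:

```dockerfile
ARG UBUNTU_VERSION=22.04

# Build stage: compile the server target from the repository source
FROM ubuntu:$UBUNTU_VERSION AS build

RUN apt-get update && apt-get install -y build-essential git

WORKDIR /app
COPY . .

# Assumes the Makefile exposes a `server` target
RUN make server

# Runtime stage: ship only the server binary, not the toolchain
FROM ubuntu:$UBUNTU_VERSION AS runtime

COPY --from=build /app/server /server

# The llama.cpp server listens on 8080 by default
EXPOSE 8080
ENTRYPOINT [ "/server" ]
```

The platform variants follow the same pattern but swap the base images (e.g. CUDA or ROCm development and runtime images) and enable the corresponding build flags; making `./server` the entrypoint instead of `./main` is what makes these containers server-first, per the commit message.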