Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2025-01-10 04:20:24 +01:00)
llama.cpp / .devops
Latest commit: 0d6fb52be0 by Brandon Squizzato, "Install curl in runtime layer (#8693)", 2024-08-04 20:17:16 +02:00
nix                             nix: cuda: rely on propagatedBuildInputs (#8772)                  2024-07-30 13:35:30 -07:00
cloud-v-pipeline                …
full-cuda.Dockerfile            …
full-rocm.Dockerfile            …
full.Dockerfile                 …
llama-cli-cuda.Dockerfile       …
llama-cli-intel.Dockerfile      Build Llama SYCL Intel with static libs (#8668)                   2024-07-24 14:36:00 +01:00
llama-cli-rocm.Dockerfile       …
llama-cli-vulkan.Dockerfile     …
llama-cli.Dockerfile            …
llama-cpp-cuda.srpm.spec        …
llama-cpp.srpm.spec             …
llama-server-cuda.Dockerfile    …
llama-server-intel.Dockerfile   Build Llama SYCL Intel with static libs (#8668)                   2024-07-24 14:36:00 +01:00
llama-server-rocm.Dockerfile    …
llama-server-vulkan.Dockerfile  …
llama-server.Dockerfile         Install curl in runtime layer (#8693)                             2024-08-04 20:17:16 +02:00
tools.sh                        examples : remove finetune and train-text-from-scratch (#8669)    2024-07-25 10:39:04 +02:00
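
The Dockerfiles in this directory are normally built from the repository root with the standard docker build workflow. The command below is a minimal sketch based on that assumption, not on anything stated in this listing; the image tag llama-server:local is illustrative.

    # Assumption: run from the llama.cpp repository root so the build context contains the sources.
    # -f selects one of the .devops Dockerfiles listed above; the tag name is a placeholder.
    docker build -f .devops/llama-server.Dockerfile -t llama-server:local .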