Mirror of https://github.com/ggerganov/llama.cpp.git
46c69e0e75
* ci : faster CUDA toolkit installation method and use ccache
* remove fetch-depth
* only pack CUDA runtime on master