Mirror of https://github.com/ggerganov/llama.cpp.git
Synced 2024-12-25 05:48:47 +01:00
Commit 9c4c9cc83f
* Move convert.py to examples/convert-no-torch.py
* Fix CI, scripts, readme files
* convert-no-torch -> convert-legacy-llama
* Move vocab thing to vocab.py
* Fix convert-no-torch -> convert-legacy-llama
* Fix lost convert.py in ci/run.sh
* Fix imports
* Fix gguf not imported correctly
* Fix flake8 complaints
* Fix check-requirements.sh
* Get rid of ADDED_TOKENS_FILE, FAST_TOKENIZER_FILE
* Review fixes
Repository contents:

* nix/
* cloud-v-pipeline
* full-cuda.Dockerfile
* full-rocm.Dockerfile
* full.Dockerfile
* llama-cpp-clblast.srpm.spec
* llama-cpp-cuda.srpm.spec
* llama-cpp.srpm.spec
* main-cuda.Dockerfile
* main-intel.Dockerfile
* main-rocm.Dockerfile
* main-vulkan.Dockerfile
* main.Dockerfile
* server-cuda.Dockerfile
* server-intel.Dockerfile
* server-rocm.Dockerfile
* server-vulkan.Dockerfile
* server.Dockerfile
* tools.sh