Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-27 06:39:25 +01:00)
Latest commit: 9c4c9cc83f
* Move convert.py to examples/convert-no-torch.py
* Fix CI, scripts, readme files
* convert-no-torch -> convert-legacy-llama
* Move vocab thing to vocab.py
* Fix convert-no-torch -> convert-legacy-llama
* Fix lost convert.py in ci/run.sh
* Fix imports
* Fix gguf not imported correctly
* Fix flake8 complaints
* Fix check-requirements.sh
* Get rid of ADDED_TOKENS_FILE, FAST_TOKENIZER_FILE
* Review fixes
6 lines | 99 B | Plaintext
numpy~=1.24.4
sentencepiece~=0.2.0
transformers>=4.40.1,<5.0.0
gguf>=0.1.0
protobuf>=4.21.0,<5.0.0
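These pins are the dependencies of the legacy conversion script referenced by the commit above; they would typically be installed with `pip install -r <this requirements file>`. As a minimal sketch (not part of the repository), the snippet below checks that each pinned package is importable in the current environment and prints its installed version; the package list is taken from the file above, while the script itself and its function name are assumptions for illustration.

```python
# Minimal sketch: report installed versions of the pinned conversion dependencies.
# Not part of llama.cpp; package names are taken from the requirements list above.
from importlib.metadata import version, PackageNotFoundError

REQUIRED = ["numpy", "sentencepiece", "transformers", "gguf", "protobuf"]

def report_versions(packages=REQUIRED):
    """Print the installed version of each required package, or flag it as missing."""
    for name in packages:
        try:
            print(f"{name}=={version(name)}")
        except PackageNotFoundError:
            print(f"{name}: NOT INSTALLED")

if __name__ == "__main__":
    report_versions()
```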